Investigating Human-Centered Perspectives in Explainable Artificial Intelligence

The widespread use of Artificial Intelligence (AI) across various domains has led to a growing demand for algorithmic understanding, transparency, and trustworthiness. The field of eXplainable AI (XAI) aims to develop techniques that inspect and explain the behaviour of AI systems in a way that humans can understand. However, the effectiveness of an explanation depends on how users perceive it, and its acceptability is tied to how well users understand it and how compatible it is with their existing knowledge. To date, XAI research has focused primarily on the technical aspects of explanations, largely overlooking users' needs, an aspect that is essential for trustworthy AI. Meanwhile, there is growing interest in human-centered approaches at the intersection of AI and human-computer interaction, termed human-centered XAI (HC-XAI). HC-XAI explores methods to achieve user satisfaction, trust, and acceptance of XAI systems. This paper presents a systematic survey of HC-XAI, reviewing 75 papers drawn from various digital libraries. Its contributions are: (1) identifying common human-centered approaches, (2) providing readers with insights into the design perspectives of HC-XAI approaches, and (3) categorising all the surveyed papers through quantitative and qualitative analysis. The findings stimulate discussion and shed light on ongoing and upcoming research in HC-XAI.

Keywords: Artificial Intelligence (AI), Explainable AI, Human-Centered XAI, XAI Design Perspectives, Systematic Survey