The Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents

Explainable Artificial Intelligence (XAI) encompasses research and technology that explain an AI system's functioning and underlying methods, as well as efforts to improve such explanations through personalization. Our study investigates the role of lexical alignment, a natural-language personalization method, in how well people understand an explanation provided by a conversational agent. The study was conducted online and guided participants through an interaction with a conversational agent. Participants were assigned to one of three conditions: an agent designed to align its lexical choices with theirs, a non-aligning (misaligned) agent, or a control condition with no dialogue. The dialogue delivered an explanation based on a predefined set of causes and effects. Recall and understanding of the explanations were evaluated using a combination of Yes-No questions, a Cloze test (fill-in-the-blanks), and What-style questions. Analysis of the test scores revealed a significant advantage in information recall for participants who interacted with the aligning agent over those who either interacted with the non-aligning agent or went through no dialogue. The Yes-No questions probing higher-order inferences (understanding) likewise showed an advantage for participants in the aligned-dialogue condition over both the non-aligned and no-dialogue conditions. Overall, the results suggest a positive effect of lexical alignment on the understanding of explanations.

Keywords: Explainable Artificial Intelligence, lexical alignment, lexical entrainment, human-machine interaction