Authors
  • Jose M. Alonso, Senen Barro, Alberto Bugarin, Kees van Deemter, Claire Gardent, Albert Gatt, Ehud Reiter, Carles Sierra, Mariet Theune, Nava Tintarev, Hitoshi Yano, Katarzyna Budzynska
Source
  • 1st Foundations of Trustworthy AI - Integrating Learning, Optimization and Reasoning Workshop. Santiago de Compostela, Spain. 2020

Interactive Natural Language Technology for Explainable Artificial Intelligence

We have defined an interdisciplinary program for training a new generation of researchers who will be ready to make Artificial Intelligence (AI)-based models and techniques usable even by non-expert users. The final goal is to make AI self-explaining and thus, with the support of Explainable AI systems, contribute to translating knowledge into products and services for economic and social benefit. Our focus is on the automatic generation of interactive explanations in natural language, the modality humans prefer, with visualization as a complementary modality.
Keywords: Explainable Artificial Intelligence, Trustworthiness, Multimodal Explanations, Argumentative Conversational Agents, Human-centered Modeling