Authors
  • Conor Hennessy, Alberto Bugarín Diz, Ehud Reiter
Source
  • 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI), co-located with the 13th International Conference on Natural Language Generation (INLG). Dublin, Ireland. 2020

Explaining Bayesian Networks in Natural Language: State of the Art and Challenges

To increase trust in the use of Bayesian Networks and to cement their role as models that can aid critical decision making, the challenge of explainability must be faced. Previous attempts at explaining Bayesian Networks have largely focused on graphical or visual aids. In this paper we highlight the importance of a natural language approach to explanation and discuss previous and state-of-the-art approaches to the textual explanation of Bayesian Networks. We outline several challenges that remain to be addressed in the generation and validation of natural language explanations of Bayesian Networks, which can serve as a reference for future work in this area.
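As a toy illustration of the kind of textual explanation the paper discusses (this sketch is not taken from the paper; the two-node network, its probabilities, and the sentence template are invented for illustration), a posterior computed by Bayes' rule in a minimal Rain → WetGrass network can be rendered as a short natural-language sentence:

```python
# Hypothetical two-node Bayesian network: Rain -> WetGrass.
# All prior and conditional probabilities below are invented for illustration.

def posterior_rain_given_wet(p_rain, p_wet_given_rain, p_wet_given_no_rain):
    """Bayes' rule: P(Rain | Wet) = P(Wet | Rain) P(Rain) / P(Wet)."""
    p_wet = (p_wet_given_rain * p_rain
             + p_wet_given_no_rain * (1 - p_rain))
    return p_wet_given_rain * p_rain / p_wet

def explain(p_prior, p_posterior):
    """Render the inference as a simple templated English sentence."""
    direction = "raised" if p_posterior > p_prior else "lowered"
    return (f"Observing wet grass {direction} the probability of rain "
            f"from {p_prior:.0%} to {p_posterior:.0%}.")

p_prior = 0.20
p_post = posterior_rain_given_wet(p_prior, 0.90, 0.10)
print(explain(p_prior, p_post))
# -> Observing wet grass raised the probability of rain from 20% to 69%.
```

Template filling of this sort is only the simplest end of the spectrum; the approaches surveyed in the paper also cover richer explanations of evidence, reasoning chains, and model structure.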
Keywords: Explainable Artificial Intelligence, Bayesian Networks, Natural Language Generation