According to Polanyi's paradox, humans know more than they can explain, mainly due to the huge amount of implicit knowledge they unconsciously acquire through culture, heritage, etc. The same applies to Artificial Intelligence (AI) systems, which are mainly learned automatically from data. However, under EU law, humans have a right to an explanation of decisions affecting them, regardless of whether a human or an AI system makes those decisions. NL4XAI will train 11 creative, entrepreneurial and innovative early-stage researchers (ESRs), who will face the challenge of making AI self-explanatory, thus contributing to the translation of knowledge into products and services for economic and social benefit, with the support of Explainable AI (XAI) systems. Moreover, the focus of NL4XAI is on the automatic generation of interactive explanations in natural language (NL), as humans naturally produce them, and as a complement to visualization tools. As a result, ESRs are expected to broaden the use of AI models and techniques, even by non-expert users. All their developments will be validated by humans in specific use cases, and the main outcomes will be publicly reported and integrated into a common open-source software framework for XAI that will be accessible to all European citizens. In addition, results intended for commercial exploitation will be protected through licenses or patents. Notably, we have selected some of the most prominent European researchers (from both academia and industry) in each of the related fundamental topics and created a joint high-quality training program that can be seen as a pyramid, with the main research objective (designing and building XAI models) at the top. It will be achieved by jointly addressing the research objectives at the pyramid's base (NL generation and processing for XAI; argumentation technology for XAI; interactive interfaces for XAI). ESRs will also be trained in ethical and legal issues, as well as in transversal skills.
The objective of the NL4XAI network is to train a new generation of scientists who can seed the development of more human-centred AI systems, facing current and future XAI challenges through an interdisciplinary approach that merges AI, NL technology (NLP, NLG and argumentation) and human-computer interaction (HCI).