This Special Issue is supported by the IEEE CIS Task Force on Explainable Fuzzy Systems, with the aim of providing readers with a holistic view of the fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and reasoning, but also to ways of enhancing human-machine interaction through effective multi-modal explanations (e.g., graphical or textual modalities). Thus, the scope of this Special Issue is not limited to the Fuzzy Logic research community but is open to contributions from researchers, in both academia and industry, working in the multidisciplinary field of XAI. The Special Issue attracted 31 submissions reporting state-of-the-art contributions on the latest research and development, current issues, challenges, and applications in the field of XAI. Following a rigorous peer-review process, five papers were accepted for publication.
Keywords: explainability, trustworthiness