The dramatic success of Artificial Intelligence applications has been accompanied by growing complexity, which makes these systems harder for end users to understand and, as a result, undermines their trustworthiness. In this context, Explainable AI has emerged with the aim of making the decisions and internal processes of intelligent systems more comprehensible to human users. In this paper, we propose a framework for explaining predictive inference in Bayesian Networks (BNs) in natural language to non-specialist users. The framework uses a fuzzy syllogistic model to build a knowledge base of binary quantified statements that make linguistically explicit all the relevant information that is implicit in a BN approximate reasoning model. Through a number of examples, we show how the generated explanations allow the user to trace the inference steps of the approximate reasoning process in predictive Bayesian Networks.
Keywords: Content determination in natural language generation, Linguistic descriptions, Fuzzy syllogism.