Special issue on interpretable fuzzy systems

Interpretability is acknowledged as one of the most appreciated advantages of fuzzy systems in many applications, especially those with a high degree of human interaction, where it actually becomes a strong requirement. However, it is important to remark that there is a somewhat misleading but widespread belief, even within part of the fuzzy community, that regards fuzzy systems as interpretable no matter how they were designed. Of course, we are aware that the use of fuzzy logic favors the interpretability of the designed models. Thanks to their semantic expressivity, close to natural language, fuzzy variables and rules can be used to formalize linguistic propositions that are likely to be easily understood by human beings. Obviously, this fact eases the knowledge extraction and representation tasks carried out when modeling real-world complex systems. Notwithstanding, fuzzy logic is not enough by itself to guarantee the interpretability of the final model. As is thoroughly illustrated in this special issue, achieving interpretable fuzzy systems is a matter of careful design, because fuzzy systems cannot be deemed interpretable per se. Thus, several constraints have to be imposed along the whole design process with the aim of producing truly interpretable fuzzy systems, in the sense that every element of the whole system can be checked and understood by a human being. Otherwise, fuzzy systems may even become black boxes.

Keywords: Comprehensibility, Fuzzy logic, Intelligibility, Interpretability, Readability, Understandability
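
The abstract notes that fuzzy variables and rules formalize linguistic propositions close to natural language. As a purely illustrative aside, not drawn from the special issue, the minimal zero-order Takagi-Sugeno sketch below shows how such rules can read almost as plain English; the variable names, membership parameters, and crisp outputs are all assumptions made for the example.

```python
# Minimal sketch of linguistically readable fuzzy rules (illustrative only).
# The terms temp_cold / temp_hot and the fan-speed outputs are assumed values.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for the input variable "temperature" (degrees Celsius).
def temp_cold(x): return tri(x, -10.0, 0.0, 15.0)
def temp_hot(x):  return tri(x, 20.0, 35.0, 50.0)

def fan_speed(temperature):
    """Rule base that reads close to natural language:
       IF temperature IS cold THEN fan speed IS slow (20%)
       IF temperature IS hot  THEN fan speed IS fast (90%)"""
    rules = [
        (temp_cold(temperature), 20.0),  # slow
        (temp_hot(temperature), 90.0),   # fast
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 50.0  # neutral output when no rule fires

if __name__ == "__main__":
    for t in (5, 25, 40):
        print(f"temperature={t} C -> fan speed={fan_speed(t):.1f}%")
```

Even in this toy setting, interpretability depends on design choices (clearly named linguistic terms, well-separated membership functions, a small rule base), echoing the abstract's point that fuzzy logic alone does not guarantee an interpretable model.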