Fingrams: Visual representations of fuzzy rule-based inference for expert analysis of comprehensibility
Since Zadeh's proposal and Mamdani's seminal ideas, interpretability has been acknowledged as one of the most appreciated and valuable characteristics of fuzzy system identification methodologies. It represents the ability of fuzzy systems to formalize the behavior of a real system in a human-understandable way, by means of a set of linguistic variables and rules with a high semantic expressivity close to natural language. Interpretability analysis involves two main points of view: readability of the knowledge base description (regarding the complexity of fuzzy partitions and rules) and comprehensibility of the fuzzy system (regarding the implicit and explicit semantics embedded in fuzzy partitions and rules, as well as the fuzzy reasoning method). Readability has been thoroughly treated by many authors, who have proposed several criteria and metrics. Unfortunately, comprehensibility has usually been neglected because it involves cognitive aspects related to human reasoning, which are very hard to formalize and deal with. This paper proposes a new paradigm for fuzzy system comprehensibility analysis based on fuzzy systems' inference maps, called fuzzy inference-grams (fingrams), by analogy with the scientograms used for visualizing the structure of science. Fingrams graphically show the interaction between rules at the inference level in terms of co-fired rules, i.e., rules fired at the same time by a given input. The analysis of fingrams offers many possibilities: measuring the comprehensibility of fuzzy systems, detecting redundancies and/or inconsistencies among fuzzy rules, identifying the most significant rules, etc. Some of these capabilities are explored in this study for the case of fuzzy models and classifiers.
Keywords: Comprehensibility analysis, Expert analysis, Fuzzy modeling, Information visualization, Interpretability-accuracy tradeoff, Social network analysis (SNA).
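To make the co-firing idea behind fingrams concrete, the following is a minimal sketch of how a co-firing graph could be built from rule activation data. It assumes rule firings are recorded as a set of rule identifiers per input sample, and it uses a simple Jaccard-style co-firing ratio as the edge weight; the function name `build_fingram` and this particular weighting are illustrative assumptions, not necessarily the metric defined in the paper.

```python
# Sketch of a fingram-like co-firing graph (illustrative, not the paper's exact metric).
from itertools import combinations

import networkx as nx


def build_fingram(activations):
    """activations: list of sets, each holding the rule ids fired by one input sample."""
    fired_count = {}    # number of samples firing each rule
    cofired_count = {}  # number of samples firing each pair of rules together
    for fired in activations:
        for r in fired:
            fired_count[r] = fired_count.get(r, 0) + 1
        for a, b in combinations(sorted(fired), 2):
            cofired_count[(a, b)] = cofired_count.get((a, b), 0) + 1

    g = nx.Graph()
    g.add_nodes_from(fired_count)
    for (a, b), both in cofired_count.items():
        # Jaccard-style weight: co-firings over samples firing either rule.
        weight = both / (fired_count[a] + fired_count[b] - both)
        g.add_edge(a, b, weight=weight)
    return g


# Example: three inputs, each firing a subset of rules R1..R3.
fingram = build_fingram([{"R1", "R2"}, {"R2", "R3"}, {"R1", "R2"}])
for a, b, data in fingram.edges(data=True):
    print(a, b, round(data["weight"], 2))
```

The resulting weighted graph can then be laid out and inspected visually: strongly connected rule clusters hint at redundancy or inconsistency, while isolated high-degree nodes point to the most significant rules.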