HILK++: An interpretability-guided fuzzy modeling methodology for learning readable and comprehensible fuzzy rule-based classifiers
This work presents a methodology for building interpretable fuzzy systems for classification problems. We consider interpretability from two points of view: (1) readability of the system description and (2) comprehensibility of the explanations of system behavior. The fuzzy modeling methodology known as Highly Interpretable Linguistic Knowledge (HILK) is upgraded. Firstly, a feature selection procedure based on crisp decision trees is carried out. Secondly, several strong fuzzy partitions are automatically generated from experimental data for all the selected inputs. For each input, all partitions are compared and the best one according to the data distribution is selected. Thirdly, a set of linguistic rules is defined combining the previously generated linguistic variables. Then, a linguistic simplification procedure guided by a novel interpretability index is applied to obtain a more compact and general set of rules with a minimum loss of accuracy. Finally, partition tuning based on two efficient search strategies increases the system accuracy while preserving the high interpretability. Results obtained on several benchmark classification problems are encouraging because they show the ability of the new methodology to generate highly interpretable fuzzy rule-based classifiers while yielding accuracy comparable to that achieved by other methods such as neural networks and C4.5. The best configuration of HILK depends on the specific problem under consideration, but it is worth remarking that HILK is flexible enough (thanks to the combination of several algorithms at each modeling stage) to be easily adapted to a wide range of problems.
keywords: Classification, Fuzzy modeling, Interpretability, Simplification, Tuning
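The abstract refers to strong fuzzy partitions automatically generated for each selected input. As an illustrative sketch only (not the paper's implementation), the following Python snippet shows the defining property of a strong partition built from triangular membership functions: at every point of the input universe, the membership degrees sum to one. The function name, the choice of triangular sets, and the example centers are assumptions introduced for illustration.

```python
import numpy as np

def strong_triangular_partition(x, centers):
    """Membership degrees of x in a strong fuzzy partition of triangular
    fuzzy sets centered at `centers` (sorted ascending). In a strong
    partition, the memberships sum to 1 for every x."""
    centers = np.asarray(centers, dtype=float)
    mu = np.zeros(len(centers))
    if x <= centers[0]:
        mu[0] = 1.0          # below the first center: fully "low"
        return mu
    if x >= centers[-1]:
        mu[-1] = 1.0         # above the last center: fully "high"
        return mu
    # x lies between two consecutive centers; only those two sets fire
    i = np.searchsorted(centers, x) - 1
    right_weight = (x - centers[i]) / (centers[i + 1] - centers[i])
    mu[i] = 1.0 - right_weight
    mu[i + 1] = right_weight
    return mu

# Example: a three-term partition ("low", "medium", "high") on [0, 10]
print(strong_triangular_partition(3.0, [0.0, 5.0, 10.0]))  # -> [0.4, 0.6, 0.0]
```

In this sketch, the membership degrees of any input value always add up to one, which is the property that keeps the generated linguistic terms semantically interpretable.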