An empirical study on interpretability indexes through multi-objective evolutionary algorithms

In the realm of fuzzy systems, interpretability is highly valued in most applications, and it becomes essential in those cases where intensive human-machine interaction is required. Accuracy and interpretability are often conflicting goals, so we used multi-objective fuzzy modeling strategies to search for a good trade-off between them. For assessing interpretability, two different interpretability indexes have been taken into account: the Average Fired Rules (AFR), which estimates how easy a specific rule base is to comprehend, and the Logical View Index (LVI), which estimates how well a rule base satisfies logical properties. With the aim of uncovering possible relationships between AFR and LVI, each index has been optimized against the classification error in two independent experimental sessions. Experimental results show that minimizing AFR also minimizes LVI, while the converse does not hold.
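As a rough illustration of the kind of quantity AFR captures, the sketch below computes the average number of rules fired per sample for a fuzzy rule base. The abstract does not give the exact formula, so the activation model (a per-sample matrix of rule activation strengths) and the firing threshold are assumptions made here for illustration only; `average_fired_rules` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def average_fired_rules(activations: np.ndarray, threshold: float = 0.0) -> float:
    """Average number of rules fired per sample (an assumed reading of AFR).

    activations: (n_samples, n_rules) matrix of rule activation strengths,
    e.g. the minimum or product of antecedent membership degrees.
    A rule counts as "fired" on a sample when its strength exceeds `threshold`.
    A lower value suggests fewer rules must be read to explain each decision.
    """
    fired = activations > threshold
    return fired.sum(axis=1).mean()

# Example: 3 samples, 4 rules; strengths in [0, 1].
acts = np.array([
    [0.9, 0.0, 0.2, 0.0],   # 2 rules fired
    [0.0, 0.7, 0.0, 0.0],   # 1 rule fired
    [0.4, 0.3, 0.1, 0.0],   # 3 rules fired
])
print(average_fired_rules(acts))  # (2 + 1 + 3) / 3 = 2.0
```

Under this reading, a multi-objective evolutionary algorithm would treat such an index as one objective to be minimized alongside the classification error, yielding a Pareto front of accuracy-interpretability trade-offs.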

Keywords: fuzzy modeling, interpretability, interpretability indexes, multi-objective evolutionary algorithm