A User Study on the Utility of Context-Aware Explanations for Assisting Data Scientists in Error Analysis of Fuzzy Decision Trees

The growing interest in the field of eXplainable Artificial Intelligence has led to the rise of multiple explanation generation techniques aimed at bridging the gap between the algorithmic complexity of the most powerful models and the end users who are expected to take advantage of them. In addition, explanations can assist with other tasks, such as conveying how an algorithm works to a model designer, who can use that information to fine-tune the system implementing it. Such insight into the inner workings of the system helps guide design decisions and identify and prevent potential errors, leading to improvements in model properties such as accuracy, explainability, trustworthiness, and coherence with existing knowledge. In this paper, we introduce a method to enrich local explanations of fuzzy decision tree classifications with context information. The effectiveness of the proposed method was validated with a user study in which 26 participants had to detect classification errors made by a fuzzy decision tree. The results show that enriched explanations allowed participants to detect errors in the classification system more accurately, albeit taking longer to solve the task than participants who were shown regular local explanations.

keywords: Fuzzy Decision Tree, Explainable Artificial Intelligence