Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI

The growing importance of Explainable Artificial Intelligence (XAI) has highlighted the need to understand the decision-making processes of black-box models. Surrogation, emulating a black-box model (BB) with a white-box model (WB), is crucial in applications where BBs cannot be accessed directly due to security or practical concerns. Traditional fidelity measures only evaluate the similarity of the final predictions, which leads to a significant limitation: a WB can be deemed faithful even when it produces the same predictions as the BB through a completely different rationale. Addressing this limitation is crucial for developing trustworthy practical AI applications beyond XAI. To this end, we introduce ShapGAP, a novel metric that assesses the faithfulness of surrogate models by comparing their reasoning paths, using SHAP explanations as a proxy. We validate the effectiveness of ShapGAP by applying it to real-world datasets from the healthcare and finance domains and comparing its behavior against traditional fidelity measures. Our results show that ShapGAP enables better understanding of and trust in XAI systems, revealing the potential dangers of relying on models with high task accuracy but unfaithful explanations. ShapGAP serves as a valuable tool for identifying faithful surrogate models, paving the way for more reliable and Trustworthy AI applications.

Keywords: Explainable Artificial Intelligence, Fidelity Measures, Surrogate Models, Interpretability Assessment, Faithfulness, Shapley Values
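
To make the core idea concrete, the sketch below illustrates one way a reasoning-path comparison of this kind could be computed: given SHAP attribution matrices for the BB and the WB on the same instances, it reports their mean L2 and cosine distances. This is a minimal illustration under those assumptions, not the paper's reference implementation; the helper names `shapgap_l2` and `shapgap_cos` are hypothetical.

```python
# Minimal sketch: compare per-instance SHAP attribution vectors of a
# black-box model (BB) and its white-box surrogate (WB).
# Assumes SHAP attributions were already computed for both models on the
# same instances, e.g. with the shap library.
import numpy as np


def shapgap_l2(shap_bb: np.ndarray, shap_wb: np.ndarray) -> float:
    """Mean Euclidean distance between per-instance SHAP vectors.

    shap_bb, shap_wb: arrays of shape (n_instances, n_features).
    Lower values mean the surrogate's attributions track the BB's more closely.
    """
    return float(np.mean(np.linalg.norm(shap_bb - shap_wb, axis=1)))


def shapgap_cos(shap_bb: np.ndarray, shap_wb: np.ndarray, eps: float = 1e-12) -> float:
    """Mean cosine distance (1 - cosine similarity) between per-instance SHAP vectors."""
    num = np.sum(shap_bb * shap_wb, axis=1)
    den = np.linalg.norm(shap_bb, axis=1) * np.linalg.norm(shap_wb, axis=1) + eps
    return float(np.mean(1.0 - num / den))


# Hypothetical usage, assuming bb_explainer and wb_explainer are SHAP
# explainers fitted to the BB and WB respectively:
# shap_bb = bb_explainer(X_test).values
# shap_wb = wb_explainer(X_test).values
# print(shapgap_l2(shap_bb, shap_wb), shapgap_cos(shap_bb, shap_wb))
```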