Paving the way towards counterfactual generation in argumentative conversational agents

Counterfactual explanations present an effective way to interpret the predictions of black-box machine learning algorithms. Whereas there is a significant body of research on counterfactual reasoning in philosophy and theoretical computer science, little attention has been paid to the explanatory capacity of counterfactuals. In this paper, we review the methods from argumentation theory and natural language generation that counterfactual explanation generation could benefit from most, and discuss prospective directions for further research on counterfactual generation in explainable Artificial Intelligence.