Ensuring that data-driven decisions in process mining rest on clearly grounded and properly understood analyses is crucial. We map key aspects of AI explainability onto the context of process mining, drawing on both academic research (i.e., Explainable AI, Human-Computer Interaction, and AI ethics) and current business practice. As a baseline, we identify the explainability requirements stated in contemporary governmental and industry policy documents and standards, so that process mining services can demonstrate compliance and trustworthiness. Building on this map, we discuss a methodology for improving both the operationalization of explanations in process mining and their reception by clients, geared toward further user cognitive studies and business validation. The goal is to advance the state of the art in process mining with respect to explainable AI governance by providing practical guidance for organizations implementing explainable services.