Novel Methods for Bayesian Network Construction and Explanation Using Natural Language

My PhD research advances Explainable AI (XAI) for Bayesian Networks (BNs) by improving their construction, inference explanation, and user trust. I conducted the first systematic review of BN reusability, which revealed major gaps in current practice. To address structure learning, I developed CausalGraphBench, a benchmark for evaluating Large Language Model (LLM)-driven BN construction. For inference explanation, I introduced the Factor Argument framework, which enhances natural language explanations of BN inference and was validated in the medical domain. Finally, in a case study, I applied LLM-driven BN construction, generated explanations for real users, and collected their feedback.

Keywords: Explainable AI, Uncertainty