LLM Driven Justified Ontology Alignment
Ontology alignment is a key task for achieving semantic interoperability across heterogeneous knowledge graphs; yet, it remains a time-consuming and expert-driven process. Recent advances in Large Language Models (LLMs) offer new opportunities to automate this task, particularly in scenarios involving the parallel development of multiple ontologies through automatic means. In this paper, we propose an approach for the automatic generation of ontology alignments using LLMs, producing mappings that are compliant with the SSSOM standard and enriched with explicit justifications and provenance metadata that can support ontology developers and domain experts in the task of developing an ontology from multiple automatically generated versions. We design and evaluate a set of progressively refined prompts (early, extended, and improved) to guide LLMs in generating structured and explainable alignments. The approach is assessed using multiple state-of-the-art models (GPT-5.4, GPT-5 Mini, Gemini Flash, and Gemini Pro) on ground truth data from the OAEI Conference dataset. The evaluation combines structural validation, standard alignment metrics (precision, recall, F1-score), and expert qualitative analysis. The results show that LLMs can generate high-quality candidate mappings, particularly for lexically similar entities, and that prompt engineering significantly improves output consistency and compliance with formal schemas. However, limitations persist in semantic discrimination, predicate selection, and the exploitation of ontology structure. These findings indicate that LLMs are best suited as assistive tools for knowledge engineers and domain experts in managing the parallel evolution of ontologies.
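The evaluation described above compares generated mappings against a reference alignment using precision, recall, and F1-score. The following is an illustrative sketch (not the authors' code) of how such scoring works over SSSOM-style mapping triples; the CURIEs and entity names are invented for the example, loosely following the SSSOM core columns (subject_id, predicate_id, object_id).

```python
# Illustrative sketch: scoring candidate SSSOM-style mappings against a
# reference alignment with precision, recall, and F1. All URIs/CURIEs
# below are hypothetical examples, not taken from the paper.

def alignment_metrics(candidates, reference):
    """Return (precision, recall, f1) for two collections of mapping triples."""
    cand, ref = set(candidates), set(reference)
    tp = len(cand & ref)                      # true positives: mappings in both
    precision = tp / len(cand) if cand else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical mappings between two conference ontologies.
reference = [
    ("cmt:Paper", "skos:exactMatch", "conf:Paper"),
    ("cmt:Author", "skos:exactMatch", "conf:Author"),
    ("cmt:Review", "skos:exactMatch", "conf:Review"),
]
candidates = [
    ("cmt:Paper", "skos:exactMatch", "conf:Paper"),
    ("cmt:Author", "skos:exactMatch", "conf:Author"),
    ("cmt:Chair", "skos:exactMatch", "conf:Chairman"),  # false positive
]

p, r, f1 = alignment_metrics(candidates, reference)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Here two of three candidate mappings are correct and one reference mapping is missed, so precision and recall both come out at 2/3. A fuller SSSOM record would also carry a `mapping_justification` and provenance fields, which is where the explicit justifications discussed in the paper would live.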
Keywords: Ontology Alignment, Ontology Matching, Large Language Models, Explainable AI
Publication: Conference
April 9, 2026
Diego Conde-Herreros, George Hannah, Terry R. Payne, Jacopo de Berardinis, Valentina Tamma, David Chaves-Fraga and Oscar Corcho