
International Podium for CiTIUS' Explainable Artificial Intelligence
A team from the centre achieved third place in the ‘IEEE FLAME Technical Challenge 2024,’ a global competition organized by the IEEE Computational Intelligence Society that rewards innovative work combining language models with computational intelligence techniques to develop AI systems capable of generating reliable explanations.
A team of researchers from the Singular Research Centre in Intelligent Technologies of the University of Santiago de Compostela (CiTIUS) has achieved third place in the international IEEE FLAME Technical Challenge 2024. This competition, promoted and organized by the prestigious IEEE Computational Intelligence Society, seeks innovative strategies for combining large language models (LLMs, such as those behind generative AI tools) with advanced computational intelligence techniques, a branch of AI that designs intelligent models inspired by nature, particularly biology and human language; examples include artificial neural networks, evolutionary computation, and fuzzy logic.
The award-winning team, composed of José María Alonso, Pablo Miguel Pérez, Alejandro Catalá, and Alberto Bugarín, presented a project addressing one of the most significant challenges in AI today: how to generate precise, error-free explanations that are understandable to humans. A key difficulty is avoiding the so-called “hallucinations” of generative AI models, a phenomenon in which generated responses appear reliable but contain incorrect or fabricated information.
The finalist project, MAI-XAI-CiTIUS-USC, combines the power of language models with systems based on fuzzy logic, a technique in AI that allows machines to handle imprecise linguistic concepts such as “high,” “low,” or “approximately,” which are essential in human communication.
Through logical rules, such as 'If something very similar to A is observed, then we can predict that something very similar to B will happen', these systems transform the inherent subjectivity of human language into concrete numerical values. This grants models approximate reasoning (or common sense) and the ability to handle linguistic imprecision, improving their capacity to understand and respond to people in a natural and effective way. Additionally, the proposed architecture incorporates tools to retrieve relevant information from large databases and adapt responses to the specific context of each user.
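To make the idea of a fuzzy rule more concrete, the following minimal Python sketch shows how a rule of the kind quoted above could be evaluated. The membership function, thresholds, and names used here are illustrative assumptions for exposition only; they are not taken from the MAI-XAI-CiTIUS-USC system.

```python
# Illustrative sketch of a single fuzzy rule:
# "IF the observed case is very similar to A, THEN predict something very similar to B."
# Membership function shape and thresholds are hypothetical, chosen only for the example.

def similarity_is_high(similarity: float) -> float:
    """Map a similarity score in [0, 1] to a degree of membership in the fuzzy set 'high'."""
    # Piecewise-linear membership: 0 below 0.5, 1 above 0.9, linear in between.
    if similarity <= 0.5:
        return 0.0
    if similarity >= 0.9:
        return 1.0
    return (similarity - 0.5) / 0.4

def predict_confidence(similarity_to_a: float) -> float:
    """Fire the rule to the degree that its premise holds:
    the more the observation resembles A, the more confidently B is predicted."""
    return similarity_is_high(similarity_to_a)

# An observation that is 0.8 similar to A yields a 0.75 degree of confidence
# that something very similar to B will happen.
print(predict_confidence(0.8))
```

In a complete fuzzy system, many such rules are evaluated in parallel and their conclusions are aggregated into both a numerical output and a short linguistic explanation, which is what makes this approach attractive for explainable AI.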
A success linked to the XAI4SOC project
The award-winning work is part of the research project XAI4SOC (Explainable Artificial Intelligence for Healthy Ageing and Social Well-being), which develops advanced, human-centred AI tools with applications for vulnerable populations (such as older adults at risk of dementia, as well as adolescents). XAI4SOC seeks to create explainable algorithms that empower users and promote their physical, mental, and social well-being.
The proposed solution combines LLMs with fuzzy logic, and thanks to its modular design, it can be adapted to different contexts, making it a useful tool in key sectors that demand clear and reliable AI-based explanations, such as healthcare and education.
Global recognition of Galician talent
During the competition evaluations, the project stood out for its ability to significantly reduce errors in the responses generated by the models while producing concise, easy-to-understand explanations, avoiding the long narratives that often complicate the interpretation of generative AI systems. The results also demonstrated a high level of accuracy in the responses, comparable to or exceeding that of the other proposed solutions.
The IEEE FLAME Technical Challenge, whose full name is Fusing Large lAnguage Models with computational intElligence, attracted proposals from around the world in this edition, of which only six reached the final after a rigorous selection process. The work of the CiTIUS team, which achieved third place, highlights the excellence of the research conducted in Galicia and reinforces the international profile of the centre. This recognition also underscores the importance of combining the expertise of established researchers with emerging talent such as Pablo Miguel Pérez, a computer engineering graduate and Master's student in AI at the USC, who contributed actively to the work through his participation in XAI4SOC.
The project, explained in detail in a video created specifically for the recently concluded competition, aims to revolutionize the use of language models in critical contexts where explainability and reliability are essential, such as decision-making in healthcare, education, risk assessment in the financial sector, and data analysis in legal processes, among others.