
HumanAIze: Toward Reliable and Transparent Large Language Models
HumanAIze is a coordinated research project aimed at advancing the development of large language models (LLMs) that are more reliable, transparent, and suited to multilingual contexts. The project combines expertise from artificial intelligence, linguistics, and legal studies to improve human–AI interaction, with special attention to linguistic diversity, ethics, and safety. Through collaboration among several research institutions, HumanAIze seeks to develop open technologies that contribute to more accessible, responsible, and socially aligned AI systems.
Objectives
The main objective of HumanAIze is to foster a new generation of language models that are more trustworthy, fair, and culturally aware. To achieve this, the project focuses on improving key aspects such as bias reduction, explainability and verification of model outputs, support for multiple languages—including low-resource languages—and compliance with ethical and legal principles. In addition, the project aims to develop open technologies that strengthen technological sovereignty and promote innovation in artificial intelligence across Europe.
Project
Reference: AIA2025-163322-C62
Principal investigators: Marcos Garcia González, Pablo Gamallo Otero
Research team: Nelly Condori Fernández, Senén Barro Ameneiro, Mario Ezra Aragón Saenzpardo, David Enrique Losada Carril, Anna Temerko, Daniel Cores Costa, Marcos Fernández Pichel, Marta Vázquez Abuín