Trustworthy Artificial Intelligence (AI) is a line of research in AI, by and for people, that aims to design models and implement intelligent technologies capable of interacting in a friendly, non-harmful and trustworthy way, being environmentally friendly, and complying with international legislation, human rights and the principles of sustainable development. It is a multidisciplinary line of research that involves not only technological aspects but also ethical, legal, socio-economic and cultural ones.
Our work results in new intelligent models and technologies, infrastructures, software tools and international standards that can be put at the service of citizens, governments and companies in order to democratise and facilitate the use of trustworthy AI.
Our research addresses the design of inherently interpretable AI models, optimising the trade-off between interpretability and accuracy and hybridising explicit knowledge with machine learning. We also develop new approaches and methodologies to facilitate the interpretability and explainability of AI models by incorporating didactic and dialectical strategies, to design self-explanatory and sustainable AI models (reduction, reuse and recycling of available resources), and to evaluate the trustworthiness of AI models (protocols, metrics and tools) throughout their full development cycle.
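The interpretability-accuracy trade-off mentioned above can be illustrated with a minimal, purely hypothetical sketch: a single-rule classifier is easy for a person to read and audit, while adding rules recovers accuracy at the cost of readability. The data, concept and rules below are invented for illustration and do not represent any model developed in this line of research.

```python
import random

random.seed(0)

def make_data(n=1000):
    """Hypothetical toy dataset: the label mostly depends on x1,
    except in a small corner region that also involves x2."""
    data = []
    for _ in range(n):
        x1, x2 = random.random(), random.random()
        y = 1 if (x1 > 0.5 and not (x2 > 0.8 and x1 < 0.6)) else 0
        data.append((x1, x2, y))
    return data

def accuracy(model, data):
    return sum(model(x1, x2) == y for x1, x2, y in data) / len(data)

# One rule: highly interpretable, but misses the exception region.
one_rule = lambda x1, x2: 1 if x1 > 0.5 else 0

# Two rules: captures the exception, more accurate, harder to read.
two_rule = lambda x1, x2: 1 if (x1 > 0.5 and not (x2 > 0.8 and x1 < 0.6)) else 0

data = make_data()
acc1, acc2 = accuracy(one_rule, data), accuracy(two_rule, data)
print(f"1 rule:  accuracy={acc1:.3f}")
print(f"2 rules: accuracy={acc2:.3f}")
```

Each added rule here buys accuracy by complicating the explanation a human must follow, which is precisely the balance that inherently interpretable model design tries to optimise.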
Our work also focuses on the assessment, detection and prevention of all types of bias, whether present in the data, introduced by the AI models, or arising from the mechanisms used to evaluate them, taking into account cognitive, cultural and other biases.
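One common starting point for bias assessment in data or model outputs is to compare selection rates between groups (a demographic parity gap). The sketch below is purely illustrative, with synthetic decisions and an injected skew; the group names, rates and threshold are all hypothetical and do not reflect any real dataset or evaluation protocol.

```python
import random

random.seed(42)

# Synthetic screening decisions with a deliberately injected skew
# against group "B" (hypothetical values, for illustration only).
records = []
for _ in range(5000):
    group = random.choice(["A", "B"])
    p_accept = 0.6 if group == "A" else 0.45  # injected bias
    records.append((group, random.random() < p_accept))

def selection_rate(records, group):
    outcomes = [accepted for g, accepted in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(records, "A")
rate_b = selection_rate(records, "B")
gap = rate_a - rate_b  # demographic parity difference
print(f"selection rate A={rate_a:.3f}, B={rate_b:.3f}, gap={gap:.3f}")
```

A large gap is a signal for further investigation rather than a verdict: it may stem from the data, the model, or the evaluation mechanism itself, which is why all three are examined in this line of work.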