Trustworthy AI
Trustworthy Artificial Intelligence (AI) is a line of research in AI, by and for people, that aims to design models and build intelligent technologies able to interact in a friendly, harmless and trustworthy way, while being environmentally sustainable and compliant with international legislation, human rights and the principles of sustainable development. It is a multidisciplinary field that involves not only technological but also ethical, legal, socio-economic and cultural aspects.
Our work produces new intelligent models and technologies, infrastructures, software tools and international standards that can be put at the service of citizens, governments and companies, in order to democratise and facilitate the use of trustworthy AI.
Our research addresses the design of inherently interpretable AI models, optimising the balance between interpretability and accuracy and hybridising explicit knowledge with machine learning. We also develop new approaches and methodologies to make AI models interpretable and explainable by incorporating didactic and dialectical strategies, to design self-explanatory and sustainable AI models (reducing, reusing and recycling available resources), and to evaluate trustworthy AI models (protocols, metrics and tools) throughout their full development cycle.
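To illustrate the interpretability–accuracy balance mentioned above, the following minimal sketch (plain Python on synthetic data; all names and values are ours, not project code) contrasts a single-threshold rule, which a human can read directly, with a nearest-neighbour classifier whose decisions depend on the whole training set:

```python
# Toy illustration of the interpretability/accuracy trade-off.
# Data and function names are illustrative assumptions, not project code.

def rule_classifier(x, threshold=0.5):
    """Fully interpretable model: a single human-readable rule."""
    return 1 if x >= threshold else 0

def nearest_neighbour(x, train):
    """Less interpretable: the prediction depends on every training example."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Tiny synthetic dataset: (feature, label) pairs.
train = [(0.1, 0), (0.2, 0), (0.4, 0), (0.45, 1), (0.6, 1), (0.9, 1)]
test = [(0.3, 0), (0.5, 1), (0.7, 1), (0.44, 1)]

def accuracy(predict):
    return sum(predict(x) == y for x, y in test) / len(test)

acc_rule = accuracy(rule_classifier)
acc_knn = accuracy(lambda x: nearest_neighbour(x, train))
print(acc_rule, acc_knn)  # the flexible model fits borderline cases the rule misses
```

Here the more flexible model is more accurate but harder to explain; the research line above seeks models that narrow exactly this gap.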
Our work also covers the assessment, detection and prevention of all types of bias, whether present in the data, arising in the AI models or derived from the mechanisms used to evaluate them, taking into account cognitive, cultural and other biases.
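One common bias check of the kind described above is the demographic parity difference: the gap in positive-decision rates between groups. A minimal sketch, assuming synthetic decisions and illustrative group labels of our own:

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group rate of positive decisions.

    records: iterable of (group, decision) pairs, decision in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(records):
    """Gap between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Illustrative decisions for two groups (synthetic, not real data).
records = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
           ("b", 1), ("b", 0), ("b", 0), ("b", 0)]
print(demographic_parity_difference(records))  # 0.75 - 0.25 = 0.5
```

A large gap flags a model or dataset for closer inspection; it does not by itself establish unfairness, which is why such metrics are combined with the protocols and tools mentioned above.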
CiTIUS is affiliated with the Z-Inspection® Initiative and applies the Z-Inspection® process to assess the trustworthiness of real-life AI systems and applications. Z-Inspection® is a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses in particular on identifying and discussing ethical issues and tensions through the elaboration of socio-technical scenarios, and it follows the trustworthy-AI guidelines of the European Union's High-Level Expert Group (EU HLEG). The Z-Inspection® process is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license, and it is listed in the OECD Catalogue of AI Tools & Metrics.
Do you want to know more?
For collaborations, visits, etc., contact us.