Publications

Trustworthy AI

2024
TextFocus: Assessing the Faithfulness of Feature Attribution Methods Explanations in Natural Language Processing
Multi3Generation: Multitask, Multilingual, and Multimodal Language Generation
Introducing User Feedback-Based Counterfactual Explanations (UFCE)
Sexism Detection on a Data Diet
An operational framework for guiding human evaluation in Explainable and Trustworthy AI
Enriching Interactive Explanations with Fuzzy Temporal Constraint Networks
2023
Crawford, Kate (2023). Atlas de IA. Poder, política y costes planetarios de la inteligencia artificial [Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence]. Barcelona: Ned Ediciones.
Investigating Human-Centered Perspectives in Explainable Artificial Intelligence
CL-XAI: Toward enriched Cognitive Learning with eXplainable Artificial Intelligence
Argumentative Conversational Agents for Explainable Artificial Intelligence
A Framework for the Automatic Description of Healthcare Processes in Natural Language: Application in an Aortic Stenosis Integrated Care Process
The role of Speculations for Explainable and Trustworthy Artificial Intelligence: A use case on Art Genre Classification
A Confusion Matrix for Evaluating Feature Attribution Methods
Aprendizaxe de materias de programación e intelixencia artificial con perspectiva de xénero [Learning programming and artificial intelligence subjects with a gender perspective]
An Art Painting Style Explainable Classifier grounded on Logical and Commonsense Reasoning
AI literacy in K-12: a systematic literature review
Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
The Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents
Trustworthy Artificial Intelligence in Alzheimer’s Disease: State of the Art, Opportunities, and Challenges
2022
A intelixencia artificial fiable: moda ou necesidade? [Trustworthy artificial intelligence: fad or necessity?]
Providing female role models in STEM higher education careers, a teaching experience
XAS: Automatic yet eXplainable Age and Sex determination by combining imprecise per-tooth predictions
A Multistage Retrieval System for Health-related Misinformation Detection
FCE: Feedback based Counterfactual Explanations for Explainable AI
Perspectiva de género en Inteligencia Artificial, una necesidad [A gender perspective in Artificial Intelligence, a necessity]
Explainable and Trustworthy Artificial Intelligence