Designing Trustworthy AI Systems for Human-Centric Collaboration
Artificial Intelligence (AI) systems are increasingly embedded in complex, high-stakes sectors such as healthcare, finance, and telecommunications, fundamentally altering the nature of human-AI collaboration. Although intelligent technologies hold great promise for efficiency and innovation, their widespread adoption faces critical challenges, most notably the establishment and maintenance of human trust. Trust is widely recognised as a cornerstone of effective human-AI interaction, as users must be able to rely on AI systems to perform reliably, ethically, and transparently. Yet trust in AI remains inadequately understood, even as a growing number of scholars investigate its definitions, dimensions, and implications. Addressing these challenges is particularly timely given regulatory developments such as the EU AI Act, which prioritise transparency, explainability, fairness, and accountability. There is therefore a compelling need for standardised frameworks and tools to systematically understand, evaluate, and improve trustworthiness in human-AI collaboration. This doctoral research aims to conceptually unpack trust, develop standardised evaluation metrics, and propose actionable design strategies for trustworthy AI systems, ultimately fostering more effective and widely accepted human-AI collaborations.
Keywords: Artificial Intelligence (AI), Human-Computer Interaction, Human-Centered Artificial Intelligence