Abstract
Artificial Intelligence (AI) and Machine Learning (ML) technologies have become deeply embedded in decision support, shaping critical decision-making processes in healthcare, justice, and governance. While Deep Learning (DL) models, especially deep neural networks, have demonstrated exceptional predictive performance, their opaque "black-box" nature raises serious concerns regarding transparency, accountability, and user trust. This lack of interpretability limits the adoption and reliability of AI systems, especially in high-stakes applications where human oversight is essential. Recent advances in the eXplainable AI (XAI) domain have sought to address this issue by developing techniques that elucidate the internal logic of ML models in a human-understandable manner. However, XAI research to date has been primarily model-centric, focusing on enhancing the predictive performance of models rather than on usability or human-centered evaluation. Moreover, while XAI promises greater transparency, how and to what extent these explanations influence user trust and decision-making in real-world scenarios remains unclear.

This thesis addresses the critical gap between explainability and user trust in AI-assisted decision-making, focusing on the development and use of AI models as decision support. It highlights three major research shortcomings: (i) the absence of systematic evaluation of explanations, with current studies relying largely on subjective assessments of their accuracy; (ii) limited understanding of the actual impact of explanations on user trust; and (iii) the lack of comprehensive, well-defined, and theoretically grounded methods for measuring trust in AI-assisted decision-making, leaving the field without a robust evaluation framework. By integrating insights from Human-Computer Interaction (HCI), complex systems modeling, and algorithmic transparency, this research proposes a human-centered approach through which end-users can assess the trustworthiness of AI models. Building on these objectives, the studies in this thesis develop ML-based decision support and present its explanations to end-users within carefully designed decision-making tasks. These explanations are evaluated using established approaches from the literature (application-grounded, human-grounded, and functionally grounded), depending on each study's objectives. The evaluations examine how explainability influences end-users' decision-making and systematically model different paradigms of trust in AI. By adopting a clear and unified definition of trust, this thesis contributes practical methods for assessing and distinguishing trust paradigms, offering deeper insights into how explainability shapes user trust in AI-based decision support.

In the first study, which focused on distal myopathy, we aimed to bridge the gap between application-grounded and functionally grounded evaluation of AI explainability. We first employed functionally grounded approaches to objectively evaluate the quality of the explanations generated by the architecture. We then collected subjective opinions from healthcare professionals about these explanations and examined how their perceptions of trust evolved after interacting with them. The second study, centered on COVID-19 cases, introduced a methodology to model and measure perceived trust, that is, the trust self-reported by users. This approach established a connection between how explanations are evaluated and how users form their perception of trust in the AI decision support.
Finally, in the third study, we clearly distinguished between two major aspects of trust discussed in the literature: perceived trust (how much users report they trust the AI) and demonstrated trust (how much users actually rely on AI decisions in practice).

In conclusion, this thesis makes a significant contribution to bridging the gap between algorithmic explainability and human trust in AI-based decision support. By systematically integrating objective and subjective evaluation methods across multiple case studies, it advances understanding of how different forms of explanation influence users' perception of and reliance on AI. The proposed human-centered evaluation framework provides a structured and theoretically grounded approach for assessing trust, offering both methodological rigor and practical value for future research.
| Original language | English |
|---|---|
| Qualification | Doctor of Philosophy |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 4 Nov 2025 |
| Place of Publication | Eindhoven |
| Publisher | |
| Print ISBNs | 978-90-386-6526-9 |
| Publication status | Published - 4 Nov 2025 |