Abstract
Technical and ethical concerns impede the establishment of trust among healthcare professionals (HCPs) in developing artificial intelligence (AI)-based decision support. Yet our understanding of trust models remains limited, and a standard, accepted approach to evaluating trust in AI models is still lacking. We introduce a novel methodology to assess and quantify HCPs' perceived trust in an interpretable machine learning model that serves as clinical decision support for diagnosing COVID-19 cases. Our approach leverages fuzzy cognitive maps (FCMs) to elicit and quantify HCPs' trust mental models in order to understand trust dynamics in clinical diagnosis. Our study reveals that HCPs rely predominantly on their own expertise when interacting with the developed interpretable clinical decision support. Although the model's interpretations offer limited assistance in diagnostic tasks, they facilitate HCPs' use of the model. However, the impact of these interpretations on the establishment of perceived trust varies among HCPs: it increases trust for some while decreasing it for others. To validate the quantified perceived trust, we employ a degree of agreement metric, which quantitatively assesses whether HCPs lean more towards their own expertise or rely on the model's recommendations in diagnostic tasks. We found significant alignment between the conclusions of the two metrics, indicating successful modeling and quantification of perceived trust. In addition, a moderate to strong positive correlation between the two metrics confirmed this conclusion. This indicates that FCMs can quantify HCPs' perceived trust in a way that aligns with the actual shift in their diagnostic advice after interacting with the model.
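To illustrate the kind of FCM-based quantification the abstract describes, the sketch below iterates the generic FCM update rule A(t+1) = f(A(t) + A(t)·W) on a toy trust mental model. It is not taken from the paper: the three concepts (HCP expertise, model interpretations, perceived trust), the weight matrix `W`, the initial activations, and the helper names `sigmoid` and `fcm_infer` are all hypothetical and serve only to show how concept activations converge to a steady state from which a trust score could be read off.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    """Squashing function commonly used in FCM state updates."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_infer(state, weights, iterations=50, tol=1e-5):
    """Iterate A(t+1) = f(A(t) + A(t) @ W) until the activations converge."""
    for _ in range(iterations):
        new_state = sigmoid(state + state @ weights)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state

# Hypothetical causal weights (row -> column influence), values in [-1, 1]:
#   C0: HCP's own expertise, C1: model interpretations, C2: perceived trust.
W = np.array([
    [0.0, 0.0, 0.6],   # expertise strengthens perceived trust
    [0.0, 0.0, 0.3],   # interpretations mildly strengthen trust
    [0.0, 0.0, 0.0],   # trust has no outgoing edges in this toy model
])

# Initial concept activations, e.g. elicited from one HCP.
A0 = np.array([0.8, 0.5, 0.5])
print(fcm_infer(A0, W))  # steady-state activations; last entry ~ perceived trust
```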
Original language | English |
---|---|
Status | Accepted/In press - 24 Mar 2025 |
Event | The 3rd World Conference on eXplainable Artificial Intelligence - Istanbul, Turkey. Duration: 9 Jul 2025 → 11 Jul 2025 |
Conference
Conference | The 3rd World Conference on eXplainable Artificial Intelligence |
---|---|
Country/Region | Turkey |
City | Istanbul |
Period | 9/07/25 → 11/07/25 |
Fingerprint
Dive into the research topics of 'Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support'. Together they form a unique fingerprint.

Projects
- 1 Active
- ENFIELD: European Lighthouse to Manifest Trustworthy and Green AI
  Van Gorp, P. (Project Manager), Zhang, C. (Project member), Grau Garcia, I. (Project member) & Baer, G. (Project member)
  1/09/23 → 31/08/26
  Project: Third tier