Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support

Research output: Contribution to conference › Paper › Academic

Abstract

Technical and ethical concerns impede the establishment of trust in artificial intelligence (AI)-based decision support among healthcare professionals (HCPs). Yet our understanding of trust models is limited, and a standard, accepted approach to evaluating trust in AI models is still lacking. We introduce a novel methodology to assess and quantify HCPs' perceived trust in an interpretable machine learning model that serves as clinical decision support for diagnosing COVID-19 cases. Our approach leverages fuzzy cognitive maps (FCMs) to elicit and quantify HCPs' trust mental models, enabling an understanding of trust dynamics in clinical diagnosis. Our study reveals that HCPs rely predominantly on their own expertise when interacting with the developed interpretable clinical decision support. Although the model's interpretations offer limited assistance in diagnostic tasks, they facilitate HCPs' use of the model. However, the impact of these interpretations on the establishment of perceived trust varies among HCPs: it increases trust for some while decreasing it for others. To validate the quantified perceived trust, we employ the degree-of-agreement metric, which quantitatively assesses whether HCPs lean more towards their own expertise or rely on the model's recommendations in diagnostic tasks. We found significant alignment between the conclusions of the two metrics, indicating successful modeling and quantification of perceived trust. Moreover, a moderate to strong positive correlation between the two metrics confirmed this conclusion. This means that FCMs can quantify HCPs' perceived trust in a way that aligns with the actual shift in their diagnostic advice after interacting with the model.
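The FCM machinery behind this quantification can be illustrated with a minimal sketch: concepts (e.g., an HCP's own expertise, the model's interpretations, and perceived trust) are nodes, causal influences are a weighted adjacency matrix, and activations are iterated through a squashing function until they converge. The concept names, weights, and initial activations below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def fcm_infer(weights, state, steps=100, tol=1e-5, lam=1.0):
    """Iterate the standard FCM update A(t+1) = sigmoid(A(t) + A(t) @ W)
    until activations stabilize, then return the converged state."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-lam * x))
    for _ in range(steps):
        new_state = sigmoid(state + state @ weights)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state

# Hypothetical 4-concept trust map; W[i, j] is the causal weight
# of concept i on concept j (here, all influences point at trust).
concepts = ["own_expertise", "interpretations", "model_accuracy", "perceived_trust"]
W = np.array([
    [0.0, 0.0, 0.0, 0.6],   # own expertise -> perceived trust
    [0.0, 0.0, 0.0, 0.3],   # interpretations -> perceived trust
    [0.0, 0.0, 0.0, 0.5],   # model accuracy -> perceived trust
    [0.0, 0.0, 0.0, 0.0],
])
initial = np.array([1.0, 0.5, 0.7, 0.0])  # illustrative initial activations
final = fcm_infer(W, initial)
print(dict(zip(concepts, final.round(3))))  # converged activations in (0, 1)
```

The converged activation of the trust concept serves as the quantified perceived-trust score; in the paper's setup this score is then compared against the degree-of-agreement metric computed from HCPs' actual advice shifts.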
Original language: English
Status: Accepted/In press - 24 Mar 2025
Event: The 3rd World Conference on eXplainable Artificial Intelligence - Istanbul, Turkey
Duration: 9 Jul 2025 – 11 Jul 2025

Conference

Conference: The 3rd World Conference on eXplainable Artificial Intelligence
Country/Region: Turkey
City: Istanbul
Period: 9/07/25 – 11/07/25
