Evaluating How Explainable AI Is Perceived in the Medical Domain: A Human-Centered Quantitative Study of XAI in Chest X-Ray Diagnostics

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

The crucial role of Explainable Artificial Intelligence (XAI) in healthcare is underscored by the need for both accurate diagnosis and transparent decision making: transparency improves trust in AI decisions on the one hand and facilitates adoption by medical professionals on the other. In this paper, we present the results of a quantitative user study evaluating how widely used XAI methods are perceived by medical experts. To this end, we employ two prominent post-hoc, model-agnostic XAI methods: Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). For this study, a considerable cohort of 97 medical experts was recruited to investigate whether these XAI methods assist medical experts in their diagnosis of chest X-ray scans. We designed an evaluation framework to measure diagnostic accuracy, trust change, coherence with expert reasoning, and differences in confidence before and after the XAI explanations were shown. This large-scale study showed that both XAI methods improve scores on indicative explanations. The overall change in trust did not differ significantly between LIME and SHAP, indicating that factors beyond the provision of explanations drive trust enhancement in AI diagnostics. This work proposes a robust, human-centered benchmark that supports the research and development of interpretable, reliable, and clinically aligned AI tools, and directs the future of AI in high-stakes healthcare applications towards enhanced transparency and accountability.
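
As a minimal illustration, not code from the paper, the Python sketch below shows how post-hoc, model-agnostic LIME and SHAP explanations of the kind studied here are commonly produced for an image classifier using the open-source `lime` and `shap` packages. Both libraries treat the model as a black box: LIME fits a local linear surrogate over superpixel perturbations, while SHAP's partition explainer estimates Shapley values for masked image regions. The classifier `predict_fn` and the input array `xray` are hypothetical placeholders, not the authors' model or data.

```python
# Hedged sketch: generating LIME and SHAP explanations for an image classifier.
# `predict_fn` and `xray` are toy stand-ins for a real chest X-ray model/scan.
import numpy as np
import shap
from lime import lime_image

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Hypothetical classifier: maps a batch of H x W x 3 images to
    [p(normal), p(abnormal)]. Replace with a real chest X-ray model."""
    score = images.mean(axis=(1, 2, 3))[:, None]          # toy intensity score
    p = 1.0 / (1.0 + np.exp(-(score - 0.5)))              # squash to (0, 1)
    return np.hstack([1.0 - p, p])

xray = np.random.rand(224, 224, 3)  # placeholder for a preprocessed scan

# --- LIME: perturb superpixels, fit a local linear surrogate model ---
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(
    xray, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)
label = lime_exp.top_labels[0]
img, mask = lime_exp.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)  # `mask` marks the superpixels that most support the predicted class

# --- SHAP: partition-based Shapley value estimates over masked regions ---
masker = shap.maskers.Image("inpaint_telea", xray.shape)
shap_explainer = shap.Explainer(predict_fn, masker)
shap_values = shap_explainer(
    xray[np.newaxis, ...],
    max_evals=500,
    outputs=shap.Explanation.argsort.flip[:1],  # explain the top class only
)
shap.image_plot(shap_values)  # pixel-level attribution heat map
```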

Original language: English
Title of host publication: Trustworthy Artificial Intelligence for Healthcare
Subtitle of host publication: Second International Workshop, TAI4H 2024, Jeju, South Korea, August 4, 2024, Proceedings
Editors: Hao Chen, Yuyin Zhou, Daguang Xu, Varut Vince Vardhanabhuti
Place of Publication: Cham
Publisher: Springer
Pages: 92-108
Number of pages: 17
ISBN (Electronic): 978-3-031-67751-9
ISBN (Print): 978-3-031-67750-2
DOIs
Publication status: Published - 1 Aug 2024
Event: 2nd International Workshop on Trustworthy Artificial Intelligence for Healthcare, TAI4H 2024 - Jeju, Korea, Republic of
Duration: 4 Aug 2024 – 4 Aug 2024

Publication series

Name: Lecture Notes in Computer Science (LNCS)
Volume: 14812
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 2nd International Workshop on Trustworthy Artificial Intelligence for Healthcare, TAI4H 2024
Country/Territory: Korea, Republic of
City: Jeju
Period: 4/08/24 – 4/08/24

Keywords

  • Explainable AI
  • Human-Centered Evaluation
  • Medical Imaging
  • XAI Evaluation
  • XAI in Healthcare
