Abstract
In the age of machine learning, deep learning and artificial intelligence (AI) are expected to improve our lives. Particularly in the field of medicine and medical imaging, AI can make sense of tens, if not hundreds, of different parameters and find patterns and correlations that are difficult for humans to process. AI is expected to assist doctors in improving patient care and reducing their workload. Despite many papers showing how AI algorithms can match or outperform humans in different domains of medicine, few have been adopted into practice (Kelly et al., 2019). One of the major challenges is trust in, and acceptance of, AI results; these are important and complex issues. Confidence, trust, and uncertainty influence the way humans make decisions using AI. AI, and deep learning algorithms in particular, is a “black box” to users and even to the creators of these algorithms, which makes adoption very difficult. Should humans trust AI? Do humans trust AI too much? This chapter explores the human–AI relationship. It starts with a discussion of trust and human interactions. The expert–apprentice model is described to inform how AI could interact with clinicians. Recent technological developments and experience design aspects are then detailed, leading to an outline of recommendations for designing explainable AI, or XAI.
| Original language | English |
| --- | --- |
| Title of host publication | Explainable AI in Healthcare |
| Subtitle of host publication | Unboxing Machine Learning for Biomedicine |
| Editors | Mehul S. Raval, Mohendra Roy, Tolga Kaya, Rupal Kapdi |
| Publisher | CRC Press |
| Pages | 1-22 |
| Number of pages | 22 |
| ISBN (Electronic) | 9781000906394 |
| ISBN (Print) | 9781032367118 |
| DOIs | |
| Publication status | Published - 17 Jul 2023 |
Bibliographical note
Publisher Copyright: © 2024 selection and editorial matter, Mehul S. Raval, Mohendra Roy, Tolga Kaya, Rupal Kapdi; individual chapters, the contributors.