Abstract
Research in Explainable AI (XAI) has shown that explanations can improve users’ understanding of AI models, enhance user performance, and potentially reduce overreliance on AI predictions. However, these effects are mostly evaluated with static rather than dynamic measures, and the role of XAI in learning over trials is rarely studied. In this study, we use a context-free sequence prediction task in which 458 participants predict the next symbol in a fixed sequence (with some noise) over 80 trials. We compare performance with AI and XAI advice against no AI support, and we subsequently test for learning by removing the AI support after 40 trials (i.e., a reversal study design). Our results show that users learn faster with XAI than with AI without explanations or with no AI, and are better able to recover in performance after the AI support is removed. However, the benefits of XAI for learning are much smaller for more difficult tasks. This work demonstrates the benefits of repeated-measures user studies and multilevel modeling for better understanding learning processes in XAI. It also shows the potential of AI explanations to help users learn and offers XAI design suggestions to support learning in human-AI collaboration.
Original language | English |
---|---|
Title of host publication | IUI '25 |
Subtitle of host publication | Proceedings of the 30th International Conference on Intelligent User Interfaces |
Editors | Toby Li, Fabio Paternò, Kaisa Väänänen, Luis Leiva, Davide Spano, Katrien Verbert |
Place of Publication | New York |
Publisher | Association for Computing Machinery, Inc |
Pages | 231-246 |
Number of pages | 16 |
ISBN (Electronic) | 979-8-4007-1306-4 |
DOIs | |
Publication status | Published - 24 Mar 2025 |
Event | 30th Annual ACM Conference on Intelligent User Interfaces 2025 - Cagliari, Italy. Duration: 24 Mar 2025 → 27 Mar 2025. https://iui.acm.org/2025/ |
Conference
Conference | 30th Annual ACM Conference on Intelligent User Interfaces 2025 |
---|---|
Abbreviated title | IUI 2025 |
Country/Territory | Italy |
City | Cagliari |
Period | 24/03/25 → 27/03/25 |
Internet address | https://iui.acm.org/2025/ |
Funding
This work is part of the research programme TEPAIV with project number 612.001.752, which is financed by the Dutch Research Council (NWO).
Funders | Funder number |
---|---|
Nederlandse Organisatie voor Wetenschappelijk Onderzoek | 612.001.752 |
Keywords
- machine learning
- interpretability
- explainability
Projects
1 Finished

- TEPAIV
  Willemsen, M. C. (Project Manager) & Liang, Y. (Project member)
  28/09/18 → 15/05/24
  Project: Second tier