Explainable AI for Sequential Decision Making: Report from Dagstuhl Seminar 24372

Hendrik Baier, Mark T. Keane, Sarath Sreedharan, Silvia Tulli, Abhinav Verma, Stylianos Loukas Vasileiou

Research output: Contribution to journal › Article › Academic


Abstract

As AI applications have become ubiquitous in our lives, the research area of explainable AI (XAI) has developed rapidly, with goals such as enabling transparency, enhancing collaboration, and increasing trust in AI. However, XAI to date has largely focused on explaining the input-output mappings of "black box" models like neural networks, which have been seen as the central obstacle to the explainability of AI systems. The challenge of explaining intelligent behavior that extends over time, such as that of robots collaborating with humans or software agents engaged in complex ongoing tasks, has only recently gained attention. We may have AIs that can beat us at Go, but can they teach us how to play?
This Dagstuhl Seminar brought together academic researchers and industry experts from communities such as reinforcement learning, planning, game AI, robotics, and cognitive science to discuss their work on explainability in sequential decision-making contexts. The seminar aimed to build a shared understanding of the field and to develop a common roadmap for moving it forward. This report documents the program and its results.
Original language: English
Article number: 9
Pages (from-to): 67-103
Number of pages: 37
Journal: Dagstuhl Reports
Volume: 14
Issue number: 9
DOI:
Publication status: Published - 4 Apr 2025

