Improving understandability of feature contributions in model-agnostic explainable AI tools

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

18 Citations (Scopus)
143 Downloads (Pure)

Abstract

Model-agnostic explainable AI tools explain their predictions by means of 'local' feature contributions. We empirically investigate two potential improvements over current approaches. The first is to always present feature contributions in terms of the outcome that the user perceives as positive ("positive framing"). The second is to add "semantic labeling", which makes the directionality of each feature contribution explicit ("this feature leads to +5% eligibility"), reducing the cognitive processing steps required of the user. In a user study, participants evaluated the understandability of explanations under different framing and labeling conditions for loan applications and music recommendations. We found that positive framing improves understandability even when the prediction is negative. Additionally, adding semantic labels eliminates any framing effects on understandability, with positive labels outperforming negative labels. We implemented our suggestions in the ArgueView package [11].
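
The two interventions can be illustrated with a short sketch. The following Python snippet is hypothetical (it does not show ArgueView's actual API; all function and variable names are illustrative) and shows how a raw local feature contribution, as produced by any model-agnostic explainer, could be rendered with positive framing and a semantic label:

    # Hypothetical sketch, not ArgueView's actual API: render a local feature
    # contribution with positive framing and a semantic label.
    def render_contribution(feature: str, contribution: float,
                            positive_outcome: str = "eligibility") -> str:
        """Phrase a contribution toward the outcome the user perceives as
        positive, stating its direction explicitly (semantic labeling)."""
        pct = round(contribution * 100)
        sign = "+" if pct >= 0 else ""  # negative values already carry "-"
        return f"{feature} leads to {sign}{pct}% {positive_outcome}"

    # Contributions from any model-agnostic explainer (e.g. LIME or SHAP),
    # normalized so that positive values favor the positive class.
    contributions = {"income": 0.05, "existing debt": -0.08}
    for feature, value in contributions.items():
        print(render_contribution(feature, value))
    # income leads to +5% eligibility
    # existing debt leads to -8% eligibility

Framing every contribution in terms of the positive outcome ("eligibility") spares the reader the sign-flipping otherwise needed when the model's prediction is negative.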
Original language: English
Title of host publication: CHI 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
Place of Publication: New York
Publisher: Association for Computing Machinery, Inc
Number of pages: 9
ISBN (Electronic): 978-1-4503-9157-3
DOIs
Publication status: Published - 29 Apr 2022
Event: 2022 Conference on Human Factors in Computing Systems, CHI 2022 - Virtual, Online, United States
Duration: 30 Apr 2022 – 5 May 2022

Conference

Conference: 2022 Conference on Human Factors in Computing Systems, CHI 2022
Country/Territory: United States
City: Virtual, Online
Period: 30/04/22 – 5/05/22

Keywords

  • argumentation
  • explanations
  • interpretable machine learning
  • natural language
