Characterizing Data Scientists' Mental Models of Local Feature Importance

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Feature importance is an approach that helps to explain machine learning model predictions. It works by assigning importance scores to the input features of a particular model. Different techniques exist to derive these scores, with widely varying underlying assumptions about what importance means. Little research has been done to verify whether these assumptions match the expectations of the target user, which is imperative to ensure that feature importance values are not misinterpreted. In this work, we explore data scientists’ mental models of (local) feature importance and compare these with the conceptual models of the techniques. We first identify several properties of local feature importance techniques that could potentially lead to misinterpretation. Subsequently, we explore the expectations data scientists have about local feature importance through an exploratory (qualitative and quantitative) survey of 34 data scientists in industry. We compare the identified expectations to the theory and assumptions behind the techniques and find that the two are not always in agreement.
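
For readers unfamiliar with the setting, the snippet below is a minimal sketch of what a local feature importance technique produces: one signed score per input feature for a single prediction. It is not taken from the paper; the choice of SHAP's TreeExplainer, scikit-learn, and the diabetes dataset are illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's setup): local feature
    # importance scores for one prediction, computed with SHAP.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(random_state=0).fit(X, y)

    # TreeExplainer attributes a prediction's deviation from the model's
    # average output to the individual input features (SHAP values).
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[[0]])  # explain the first instance

    # One signed score per feature for this specific prediction ("local").
    for name, score in zip(X.columns, shap_values[0]):
        print(f"{name}: {score:+.3f}")

Other techniques (e.g., LIME) would generally assign different scores to the same instance, reflecting the differing underlying assumptions about importance that the abstract refers to.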
Original language: English
Title of host publication: NordiCHI '22: Nordic Human-Computer Interaction Conference
Publisher: Association for Computing Machinery, Inc
Number of pages: 12
ISBN (Electronic): 978-1-4503-9699-8
DOIs
Publication status: Published - 8 Oct 2022
Event: 12th Nordic Conference on Human-Computer Interaction: Participative Computing for Sustainable Futures, NordiCHI 2022 - Aarhus, Denmark
Duration: 8 Oct 2022 - 12 Oct 2022
Conference number: 12

Conference

Conference: 12th Nordic Conference on Human-Computer Interaction: Participative Computing for Sustainable Futures, NordiCHI 2022
Abbreviated title: NordiCHI 2022
Country/Territory: Denmark
City: Aarhus
Period: 8/10/22 - 12/10/22

Keywords

  • Explainable AI
  • Feature importance
  • Interpretability
