Machine Learning Interpretability through Contribution-Value Plots

Research output: Contribution to conference › Paper › Academic


Abstract

The field of explainable artificial intelligence aims to help experts understand complex machine learning models. One key approach is to show the impact of a feature on the model prediction, which helps experts verify and validate the predictions the model provides. However, many challenges remain open. For example, because interpretability is subjective, a strict definition of concepts such as the contribution of a feature remains elusive. Different techniques rest on different underlying assumptions, which can lead to inconsistent and conflicting views. In this work, we introduce Local and Global Contribution-Value plots as a novel approach to visualizing the impact of a feature on predictions and its relationship with the feature value. We discuss design decisions and present an exemplary visual analytics implementation that provides new insights into the model.
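The keywords below list partial dependence among the related techniques for showing feature impact. As an illustrative baseline only, not the paper's Contribution-Value plots, a minimal partial-dependence computation with a hand-written stand-in "model" might look like this sketch (the model function and data are hypothetical):

```python
import numpy as np

def model_predict(X):
    # Hypothetical black-box model: quadratic in feature 0, linear in feature 1.
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

def partial_dependence(predict, X, feature, grid_size=20):
    """Average prediction as one feature sweeps a grid of values,
    while the remaining features keep their observed values."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    curve = np.empty(grid_size)
    for i, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = v  # force every sample's feature to the grid value
        curve[i] = predict(X_mod).mean()
    return grid, curve

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
grid, curve = partial_dependence(model_predict, X, feature=0)
# Plotting `curve` against `grid` yields a U-shaped global dependence
# curve, reflecting the quadratic term in the stand-in model.
```

Unlike the plot proposed in the paper, a partial-dependence curve averages over all instances, which is one of the assumption-laden aggregation choices the abstract alludes to.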
Original language: English
Number of pages: 5
DOIs
Publication status: Published - 8 Dec 2020
Event: The 13th International Symposium on Visual Information Communication and Interaction (VINCI 2020) - Eindhoven, Netherlands
Duration: 8 Dec 2020 - 10 Dec 2020
Conference number: 13
http://vinci-conf.org

Conference

Conference: The 13th International Symposium on Visual Information Communication and Interaction (VINCI 2020)
Abbreviated title: VINCI
Country: Netherlands
City: Eindhoven
Period: 8/12/20 - 10/12/20

Keywords

  • Visualization
  • Explainable AI
  • Machine learning
  • Sensitivity analysis
  • Partial dependence
  • Feature contribution

