Looking deeper into deep learning model: attribution-based explanations of TextCNN

Wenting Xiong, Iftitahu Ni'mah, Juan M. G. Huesca, Werner van Ipenburg, Jan Veldsink, Mykola Pechenizkiy

Research output: Contribution to conference › Paper › Academic


Abstract

Layer-wise Relevance Propagation (LRP) and saliency maps have recently been used to explain the predictions of deep learning models, specifically in the domain of text classification. Given different attribution-based explanations that highlight relevant words for a predicted class label, experiments based on word-deletion perturbation are a common evaluation method. This word-removal approach, however, disregards any linguistic dependencies that may exist between words or phrases in a sentence, which could semantically guide a classifier to a particular prediction. In this paper, we present a feature-based evaluation framework for comparing the two attribution methods on customer reviews (public data sets) and Customer Due Diligence (CDD) extracted reports (corporate data set). Instead of removing words based on the relevance score, we investigate perturbations based on embedded-feature removal from intermediate layers of convolutional neural networks. Our experimental study is carried out on embedded-word, embedded-document, and embedded-ngram explanations. Using the proposed framework, we provide a visualization tool to assist analysts in reasoning toward the model's final prediction.
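The contrast between word-deletion and embedded-feature occlusion can be illustrated with a toy sketch (this is not the paper's code; the bag-of-embeddings scorer and all names such as `EMB`, `W`, and `score` are illustrative assumptions):

```python
# Toy sketch: compare word-deletion perturbation with embedded-feature
# occlusion on a linear bag-of-embeddings "positive sentiment" scorer.
# Everything here is an illustrative assumption, not the authors' model.

EMB = {                      # 2-d embeddings for a tiny vocabulary
    "great": (1.0, 0.2),
    "movie": (0.1, 0.1),
    "boring": (-0.9, 0.3),
}
W = (1.0, 0.5)               # weights of the linear class scorer


def score(vectors):
    """Sum the word vectors, then take the dot product with W."""
    sx = sum(v[0] for v in vectors)
    sy = sum(v[1] for v in vectors)
    return W[0] * sx + W[1] * sy


sentence = ["great", "movie"]
base = score([EMB[w] for w in sentence])

# Word-deletion perturbation: remove the token from the sequence entirely.
drop_great = score([EMB[w] for w in sentence if w != "great"])

# Embedded-feature occlusion: zero the word's embedding instead, so the
# sequence length (and any positional / n-gram structure) stays intact.
occl_great = score([(0.0, 0.0) if w == "great" else EMB[w]
                    for w in sentence])

# Attribution of "great" under each perturbation scheme:
print(base - drop_great, base - occl_great)
```

For this additive scorer the two schemes agree; in a TextCNN with n-gram convolution filters, deleting a word also shifts every neighbouring n-gram window, while occluding its embedding does not, which is the distinction a feature-based evaluation at intermediate layers can exploit.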
Original language: English
Number of pages: 9
Publication status: Published - 8 Nov 2018
Event: NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy - Montreal, Canada
Duration: 7 Dec 2018 – 7 Dec 2018

Conference

Conference: NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services
Country: Canada
City: Montreal
Period: 7/12/18 – 7/12/18
Other: FEAP-AI4Fin 2018

Fingerprint

Linguistics
Labels
Classifiers
Visualization
Neural networks
Deep learning
Experiments

Keywords

  • cs.IR
  • cs.LG
  • stat.ML

Cite this

Xiong, W., Ni'mah, I., Huesca, J. M. G., van Ipenburg, W., Veldsink, J., & Pechenizkiy, M. (2018). Looking deeper into deep learning model: attribution-based explanations of TextCNN. Paper presented at NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services, Montreal, Canada.
Xiong, Wenting ; Ni'mah, Iftitahu ; Huesca, Juan M. G. ; van Ipenburg, Werner ; Veldsink, Jan ; Pechenizkiy, Mykola. / Looking deeper into deep learning model : attribution-based explanations of TextCNN. Paper presented at NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services, Montreal, Canada. 9 p.
@conference{49a7528f853344dbb2698e2d64212506,
title = "Looking deeper into deep learning model: attribution-based explanations of TextCNN",
abstract = "Layer-wise Relevance Propagation (LRP) and saliency maps have recently been used to explain the predictions of deep learning models, specifically in the domain of text classification. Given different attribution-based explanations that highlight relevant words for a predicted class label, experiments based on word-deletion perturbation are a common evaluation method. This word-removal approach, however, disregards any linguistic dependencies that may exist between words or phrases in a sentence, which could semantically guide a classifier to a particular prediction. In this paper, we present a feature-based evaluation framework for comparing the two attribution methods on customer reviews (public data sets) and Customer Due Diligence (CDD) extracted reports (corporate data set). Instead of removing words based on the relevance score, we investigate perturbations based on embedded-feature removal from intermediate layers of convolutional neural networks. Our experimental study is carried out on embedded-word, embedded-document, and embedded-ngram explanations. Using the proposed framework, we provide a visualization tool to assist analysts in reasoning toward the model's final prediction.",
keywords = "cs.IR, cs.LG, stat.ML",
author = "Wenting Xiong and Iftitahu Ni'mah and Huesca, {Juan M. G.} and {van Ipenburg}, Werner and Jan Veldsink and Mykola Pechenizkiy",
year = "2018",
month = "11",
day = "8",
language = "English",
note = "NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services : the Impact of Fairness, Explainability, Accuracy, and Privacy ; Conference date: 07-12-2018 Through 07-12-2018",

}

Xiong, W, Ni'mah, I, Huesca, JMG, van Ipenburg, W, Veldsink, J & Pechenizkiy, M 2018, 'Looking deeper into deep learning model: attribution-based explanations of TextCNN', Paper presented at NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services, Montreal, Canada, 7/12/18 - 7/12/18.

Looking deeper into deep learning model : attribution-based explanations of TextCNN. / Xiong, Wenting; Ni'mah, Iftitahu; Huesca, Juan M. G.; van Ipenburg, Werner; Veldsink, Jan; Pechenizkiy, Mykola.

2018. Paper presented at NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services, Montreal, Canada.

Research output: Contribution to conference › Paper › Academic

TY - CONF

T1 - Looking deeper into deep learning model

T2 - attribution-based explanations of TextCNN

AU - Xiong, Wenting

AU - Ni'mah, Iftitahu

AU - Huesca, Juan M. G.

AU - van Ipenburg, Werner

AU - Veldsink, Jan

AU - Pechenizkiy, Mykola

PY - 2018/11/8

Y1 - 2018/11/8

N2 - Layer-wise Relevance Propagation (LRP) and saliency maps have recently been used to explain the predictions of deep learning models, specifically in the domain of text classification. Given different attribution-based explanations that highlight relevant words for a predicted class label, experiments based on word-deletion perturbation are a common evaluation method. This word-removal approach, however, disregards any linguistic dependencies that may exist between words or phrases in a sentence, which could semantically guide a classifier to a particular prediction. In this paper, we present a feature-based evaluation framework for comparing the two attribution methods on customer reviews (public data sets) and Customer Due Diligence (CDD) extracted reports (corporate data set). Instead of removing words based on the relevance score, we investigate perturbations based on embedded-feature removal from intermediate layers of convolutional neural networks. Our experimental study is carried out on embedded-word, embedded-document, and embedded-ngram explanations. Using the proposed framework, we provide a visualization tool to assist analysts in reasoning toward the model's final prediction.

AB - Layer-wise Relevance Propagation (LRP) and saliency maps have recently been used to explain the predictions of deep learning models, specifically in the domain of text classification. Given different attribution-based explanations that highlight relevant words for a predicted class label, experiments based on word-deletion perturbation are a common evaluation method. This word-removal approach, however, disregards any linguistic dependencies that may exist between words or phrases in a sentence, which could semantically guide a classifier to a particular prediction. In this paper, we present a feature-based evaluation framework for comparing the two attribution methods on customer reviews (public data sets) and Customer Due Diligence (CDD) extracted reports (corporate data set). Instead of removing words based on the relevance score, we investigate perturbations based on embedded-feature removal from intermediate layers of convolutional neural networks. Our experimental study is carried out on embedded-word, embedded-document, and embedded-ngram explanations. Using the proposed framework, we provide a visualization tool to assist analysts in reasoning toward the model's final prediction.

KW - cs.IR

KW - cs.LG

KW - stat.ML

UR - https://arxiv.org/abs/1811.03970

M3 - Paper

ER -

Xiong W, Ni'mah I, Huesca JMG, van Ipenburg W, Veldsink J, Pechenizkiy M. Looking deeper into deep learning model: attribution-based explanations of TextCNN. 2018. Paper presented at NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services, Montreal, Canada.