Abstract
Fraud detection is a difficult problem that can benefit from predictive modeling. However, the verification of a prediction is challenging; for a single insurance policy, the model only provides a prediction score. We present a case study where we reflect on different instance-level model explanation techniques to aid a fraud detection team in their work. To this end, we designed two novel dashboards combining various state-of-the-art explanation techniques. These enable the domain expert to analyze and understand predictions, dramatically speeding up the process of filtering potential fraud cases. Finally, we discuss the lessons learned and outline open research issues.
Original language | English |
---|---|
Pages | 28-33 |
Number of pages | 6 |
Publication status | Published - 19 Jun 2018 |
Event | 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018) - Stockholmsmässan, Stockholm, Sweden<br>Duration: 14 Jul 2018 → 14 Oct 2018<br>Conference number: 3<br>https://sites.google.com/view/whi2018 |
Workshop
Workshop | 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018) |
---|---|
Abbreviated title | WHI 2018 |
Country/Territory | Sweden |
City | Stockholm |
Period | 14/07/18 → 14/10/18 |
Other | Part of W17 of IJCAI-ECAI 2018 |
Internet address | https://sites.google.com/view/whi2018 |
Keywords
- Interpretability
- Explanation
- Machine learning
- Sensitivity analysis
- Local rule extraction
- Instance-level explanations
- Fraud detection
- Case study