Learning with actionable attributes: Attention -- boundary cases!

I. Zliobaite, M. Pechenizkiy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Traditional supervised learning assumes that instances are described by observable attributes, and the goal is to learn to predict the labels of unseen instances. In many real-world applications, however, the values of some attributes are not only observable but can be proactively chosen by a decision maker. Furthermore, in some such applications the decision maker is interested not only in generating accurate predictions, but in maximizing the probability of the desired outcome. For example, a direct-marketing manager can choose the color of the envelope (an actionable attribute) in which an offer is sent to a client, hoping that the right choice will result in a positive response with higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in supervised learning settings. We emphasize that not all instances are equally sensitive to a change in action. An accurate choice of action is essential for instances that lie on the borderline (e.g., clients who do not have a strong opinion). We formulate three supervised learning approaches that select the value of an actionable attribute at the instance level. We focus the learning process on the borderline cases. The potential of the underlying ideas is demonstrated with synthetic examples and a case study on a real dataset.
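
As a rough illustration of the setting described in the abstract, the sketch below trains a model on observable features plus a binary actionable attribute and then, per instance, picks the action value that maximizes the predicted probability of the desired outcome, singling out borderline instances whose predicted probability is near 0.5. This is a minimal sketch, not the paper's method: the logistic-regression surrogate, the synthetic data, and names such as choose_action are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: x = observable client features, a = actionable
# attribute (e.g. envelope color, encoded 0/1), y = desired outcome.
n = 1000
x = rng.normal(size=(n, 2))
a = rng.integers(0, 2, size=n)
# The outcome depends on the features and, mildly, on the action, so
# the action matters most where the features leave y undecided.
logits = x[:, 0] + 0.8 * (a - 0.5)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Fit a single model on [features, action].
model = LogisticRegression().fit(np.column_stack([x, a]), y)

def choose_action(x_new, actions=(0, 1)):
    """Pick, per instance, the action maximizing P(y = 1 | x, a)."""
    probs = np.stack(
        [model.predict_proba(
             np.column_stack([x_new, np.full(len(x_new), act)]))[:, 1]
         for act in actions],
        axis=1)
    return np.asarray(actions)[np.argmax(probs, axis=1)]

# Borderline instances: predicted outcome probability near 0.5 under
# a neutral action value; these are the cases where the choice of
# action is most consequential.
p_neutral = model.predict_proba(np.column_stack([x, np.full(n, 0.5)]))[:, 1]
borderline = np.abs(p_neutral - 0.5) < 0.1
print("borderline fraction:", borderline.mean())
print("actions for first 5 borderline cases:",
      choose_action(x[borderline][:5]))
```

This corresponds to the simplest reading of the setting, choosing the action only at prediction time; the paper itself formulates three approaches and focuses the learning process on the borderline cases.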
Original language: English
Title of host publication: Proceedings 2010 IEEE International Conference on Data Mining Workshops (ICDMW, Sydney, Australia, December 13, 2010)
Publisher: IEEE Computer Society
Pages: 1021-1028
ISBN (Print): 978-1-4244-9244-2
Publication status: Published - 2011

