Quantifying explainable discrimination and removing illegal discrimination in automated decision making

F. Kamiran, I. Zliobaite, T.G.K. Calders

Research output: Contribution to journal › Journal article › Academic › peer review

78 Citations (Scopus)
6 Downloads (Pure)

Abstract

Recently, the following discrimination-aware classification problem was introduced. Historical data used for supervised learning may contain discrimination, for instance, with respect to gender. The question addressed by discrimination-aware techniques is how to train classifiers on such historical data so that they are free of discrimination with respect to a given sensitive attribute. Existing techniques that deal with this problem aim at removing all discrimination and do not take into account that part of the discrimination may be explainable by other attributes. For example, in a job application, the education level of a candidate could be such an explainable attribute. If the data contain many highly educated male candidates and only a few highly educated female candidates, a difference in acceptance rates between women and men does not necessarily reflect gender discrimination, as it could be explained by the different levels of education. Even though selecting on education level would result in more males being accepted, a difference with respect to such a criterion would be considered neither undesirable nor illegal. Current state-of-the-art techniques, however, do not take such gender-neutral explanations into account; as we show in this paper, they tend to overreact and actually start reverse discriminating. Therefore, we introduce and analyze the refined notion of conditional non-discrimination in classifier design. We show that some of the differences in decisions across the sensitive groups can be explainable and are hence tolerable. We develop a methodology for quantifying the explainable discrimination, and algorithmic techniques for removing the illegal discrimination when one or more attributes are considered explanatory. Experimental evaluation on synthetic and real-world classification datasets demonstrates that the new techniques are superior to existing approaches in this new context: they succeed in removing almost exclusively the undesirable discrimination, while leaving the explainable differences in decisions unchanged.
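The abstract does not spell out the decomposition itself, so the sketch below only illustrates the general idea under stated assumptions: the total gap in acceptance rates between the favored and the protected group is split into an explainable part (the gap that would remain even if, within every value of the explanatory attribute, both groups were accepted at a common, averaged rate) and a remaining, non-explainable part. Function and attribute names (discrimination_decomposition, gender, education, accepted) are illustrative and not taken from the paper.

```python
def discrimination_decomposition(records, sensitive, explanatory, label):
    """Split the total acceptance-rate gap into an explainable and a remaining part.

    records:     list of dicts, e.g. {"gender": "f", "education": "high", "accepted": 1}
    sensitive:   (attribute, protected_value, favored_value), e.g. ("gender", "f", "m")
    explanatory: attribute assumed to legitimately explain acceptance differences
    label:       binary decision attribute (1 = positive decision)
    """
    attr, prot, fav = sensitive
    groups = {g: [r for r in records if r[attr] == g] for g in (prot, fav)}
    if not groups[prot] or not groups[fav]:
        raise ValueError("both sensitive groups must be present in the data")

    def rate(rows):
        return sum(r[label] for r in rows) / len(rows) if rows else 0.0

    # Total discrimination: difference in acceptance rates between the two groups.
    d_all = rate(groups[fav]) - rate(groups[prot])

    # Explainable part: the gap that persists when, within each stratum of the
    # explanatory attribute, both groups are accepted at the same (averaged) rate.
    d_expl = 0.0
    for e in sorted({r[explanatory] for r in records}):
        fav_e = [r for r in groups[fav] if r[explanatory] == e]
        prot_e = [r for r in groups[prot] if r[explanatory] == e]
        star = (rate(fav_e) + rate(prot_e)) / 2.0  # stratum-level "fair" rate
        d_expl += (len(fav_e) / len(groups[fav])) * star \
                - (len(prot_e) / len(groups[prot])) * star

    return d_all, d_expl, d_all - d_expl  # last term: non-explainable part


# Toy data: education correlates with gender, and acceptance depends on education only.
data = (
    [{"gender": "m", "education": "high", "accepted": 1}] * 40
    + [{"gender": "m", "education": "low", "accepted": 0}] * 10
    + [{"gender": "f", "education": "high", "accepted": 1}] * 10
    + [{"gender": "f", "education": "low", "accepted": 0}] * 40
)
d_all, d_expl, d_bad = discrimination_decomposition(
    data, ("gender", "f", "m"), "education", "accepted")
print(f"total={d_all:.2f} explainable={d_expl:.2f} remaining={d_bad:.2f}")
```

On this toy dataset the entire 0.60 acceptance gap is attributed to education (explainable=0.60, remaining=0.00), matching the job-application example in the abstract; a non-zero remaining part would indicate discrimination that a conditional non-discrimination technique should remove.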
Original language: English
Pages (from-to): 613-644
Journal: Knowledge and Information Systems
Volume: 35
Issue number: 3
DOIs
Status: Published - 2013
