Machine learning and data mining techniques are effective tools for classifying large amounts of data. However, they tend to preserve any inherent bias in the data, for example, with regard to gender or race. Removing such bias from the data or from the learned representations is quite challenging. In this paper we study a geometric problem which models a possible approach to bias removal. Our input is a set of points P in Euclidean space R^d, where each point is labeled with k binary-valued properties. A priori we assume that it is "easy" to classify the data according to each property. Our goal is to obstruct the classification according to one property by a suitable projection to a lower-dimensional Euclidean space R^m (m < d), while classification according to all other properties remains easy. What it means for classification to be easy depends on the classification model used. We first consider classification by linear separability, as employed by support vector machines. We use Kirchberger's Theorem to show that, under certain conditions, a simple projection to R^(d-1) suffices to eliminate the linear separability of one of the properties while maintaining the linear separability of the other properties. We also study the problem of maximizing the linear "inseparability" of the chosen property. Second, we consider more complex forms of separability and prove a connection between the number of projections required to obstruct classification and the Helly-type properties of such separabilities.
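The core idea of the abstract can be illustrated with a minimal sketch (not taken from the paper; the point set and properties below are hypothetical): in R^2, two binary properties may each be linearly separable, yet projecting onto one coordinate axis can preserve the separability of one property while destroying that of the other. In one dimension, two labeled point sets are linearly separable exactly when their ranges do not overlap, which makes the check trivial.

```python
# Illustrative example (assumed data, not from the paper): four points in R^2
# with two binary properties, A = (x > 0) and B = (y > 0). Projecting onto the
# x-axis keeps property A linearly separable but makes property B inseparable.

def separable_1d(pos, neg):
    """1-D point sets are linearly separable iff their ranges do not overlap."""
    return max(pos) < min(neg) or max(neg) < min(pos)

points = [(-2.0, 1.0), (-1.0, -1.0), (1.0, 2.0), (2.0, -2.0)]

# Project to R^1 by dropping the y-coordinate, split by each property.
a_pos = [x for (x, y) in points if x > 0]   # property A holds
a_neg = [x for (x, y) in points if x <= 0]  # property A fails
b_pos = [x for (x, y) in points if y > 0]   # property B holds
b_neg = [x for (x, y) in points if y <= 0]  # property B fails

print(separable_1d(a_pos, a_neg))  # True: property A survives the projection
print(separable_1d(b_pos, b_neg))  # False: property B is now inseparable
```

The paper's results concern when such a separability-destroying projection is guaranteed to exist (via Kirchberger's Theorem) and how inseparable the chosen property can be made; this sketch only shows the phenomenon itself.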
|Title of host publication||46th International Symposium on Mathematical Foundations of Computer Science, MFCS 2021|
|Editors||Filippo Bonchi, Simon J. Puglisi|
|Publisher||Schloss Dagstuhl - Leibniz-Zentrum für Informatik|
|Publication status||Published - 1 Aug 2021|
|Event||46th International Symposium on Mathematical Foundations of Computer Science, MFCS 2021 - Tallinn, Estonia|
Duration: 23 Aug 2021 → 27 Aug 2021
|Name||Leibniz International Proceedings in Informatics, LIPIcs|
|Conference||46th International Symposium on Mathematical Foundations of Computer Science, MFCS 2021|
|Period||23/08/21 → 27/08/21|
|Bibliographical note||Funding Information: Supported by the Dutch Research Council (NWO); 612.001.651.|
© Pantea Haghighatkhah, Wouter Meulemans, Bettina Speckmann, Jérôme Urhausen, and Kevin Verbeek; licensed under Creative Commons License CC-BY 4.0 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021).
- Models of learning