Structural causal models reveal confounder bias in linear program modelling

Matej Zečević, Devendra Singh Dhami (Corresponding author), Kristian Kersting

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Recent years have been marked by extensive research on adversarial attacks, especially on deep neural networks. With this work we pose and investigate the question of whether the phenomenon might be more general in nature, that is, whether adversarial-style attacks exist outside classical classification tasks. Specifically, we investigate optimization problems, as they constitute a fundamental part of modern AI research. To this end, we consider the base class of optimizers, namely Linear Programs (LPs). In an initial attempt at a naïve mapping between the formalism of adversarial examples and LPs, we quickly identify the key ingredients missing for a reasonable notion of adversarial examples for LPs. Intriguingly, Pearl's formalism of causality allows for the right description of adversarial-like examples for LPs. Characteristically, we show the direct influence of the Structural Causal Model (SCM) on the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs (inherited from said SCM) that allows for adversarial-style attacks. We provide both a formal general proof and existential proofs of such intriguing SCM-based LP parameterizations for three combinatorial problems, namely Linear Assignment, Shortest Path, and a real-world energy-systems problem.
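The abstract's core mechanism, a hidden confounder that parameterizes the LP itself, can be illustrated with a toy example. The following is a minimal sketch, not the paper's construction: the structural equations, the two-variable "path choice" LP, and all variable names are hypothetical, chosen only to show how two instances with nearly indistinguishable observed covariates can have flipped LP optima when an unobserved confounder drives the cost vector.

```python
"""Toy sketch (hypothetical SCM, not the paper's): a hidden confounder Z
drives both an observed covariate X and the LP cost vector c, so instances
that look alike in X can have opposite optimal LP solutions."""
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def scm_sample():
    z = rng.integers(0, 2)                               # Z ~ Bernoulli(0.5), unobserved
    x = 0.5 + 0.02 * (2 * z - 1) + 0.1 * rng.normal()    # X := f(Z, noise), observed
    c = np.array([1.0, 2.0]) if z == 0 else np.array([2.0, 1.0])  # costs c := g(Z)
    return z, x, c

# LP: min c^T v  s.t.  v1 + v2 = 1, v >= 0  (pick one of two "paths")
A_eq, b_eq = [[1.0, 1.0]], [1.0]

for _ in range(4):
    z, x, c = scm_sample()
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
    print(f"z={z}  x={x:+.3f}  c={c}  chosen path={int(np.argmax(res.x))}")
# Because Z (not X) parameterizes the costs, near-identical observed x
# values can correspond to flipped optima: an adversarial-style effect
# that conditioning on X alone cannot explain away.
```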

Original language: English
Pages (from-to): 1329-1349
Number of pages: 21
Journal: Machine Learning
Volume: 113
Issue number: 3
Publication status: Published - Mar 2024

Keywords

  • Adversarial-style examples
  • Causality
  • Linear programming

