Structural Causal Models Reveal Confounder Bias in Linear Program
Modelling
- URL: http://arxiv.org/abs/2105.12697v6
- Date: Tue, 7 Nov 2023 12:38:58 GMT
- Title: Structural Causal Models Reveal Confounder Bias in Linear Program
Modelling
- Authors: Matej Zečević and Devendra Singh Dhami and Kristian Kersting
- Abstract summary: We investigate whether the phenomenon of adversarial attacks might be more general in nature, that is, whether adversarial-style attacks exist outside classical classification tasks.
Specifically, we consider the base class of Linear Programs (LPs).
We show the direct influence of the Structural Causal Model (SCM) on the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs.
- Score: 26.173103098250678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have been marked by extended research on adversarial
attacks, especially on deep neural networks. With this work we pose and
investigate the question of whether the phenomenon might be more general in
nature, that is, whether adversarial-style attacks occur outside classical
classification tasks. Specifically, we investigate optimization problems, as
they constitute a fundamental part of modern AI research. To this end, we
consider the base class of optimizers, namely Linear Programs (LPs). In our
initial attempt at a naïve mapping between the formalism of adversarial
examples and LPs, we quickly identify the key ingredients missing for making
sense of a reasonable notion of adversarial examples for LPs. Intriguingly,
Pearl's formalism of causality allows for the right description of
adversarial-like examples for LPs. Characteristically, we show the direct
influence of the Structural Causal Model (SCM) on the subsequent LP
optimization, which ultimately exposes a notion of confounding in LPs
(inherited from said SCM) that allows for adversarial-style attacks. We provide
a formal general proof alongside existential proofs of such intriguing
LP parameterizations based on SCMs for three combinatorial problems, namely
Linear Assignment, Shortest Path, and a real-world problem from energy systems.
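The abstract's core claim can be sketched in a toy example (this is an illustrative construction, not the paper's: the SCM, cost coefficients, and constraint below are all invented): an LP whose cost vector is generated by a small SCM with a hidden confounder, so the observational and interventional parameterizations lead to different optima.

```python
# Toy sketch of a confounded LP: a hidden variable Z drives both an observed
# variable X and one of the LP's cost coefficients. Averaging costs from
# observational samples vs. interventional samples do(X=0) yields different
# LPs, and hence different optimal solutions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def scm_sample(do_x=None):
    """Sample an LP cost vector from a toy SCM with hidden confounder Z."""
    z = rng.normal()  # hidden confounder
    x = z + rng.normal(scale=0.1) if do_x is None else do_x
    # Confounded cost term: E[z*x] > 0 observationally (x tracks z),
    # but E[z*x] = 0 under the intervention do(X=0).
    return np.array([1.0 + z * x, 1.5])

# LP: minimize c @ v  subject to  v1 + v2 >= 1, v >= 0
A_ub = [[-1.0, -1.0]]
b_ub = [-1.0]

c_obs = np.mean([scm_sample() for _ in range(2000)], axis=0)          # ~ [2.0, 1.5]
c_int = np.mean([scm_sample(do_x=0.0) for _ in range(2000)], axis=0)  # ~ [1.0, 1.5]

sol_obs = linprog(c_obs, A_ub=A_ub, b_ub=b_ub)  # optimum puts weight on v2
sol_int = linprog(c_int, A_ub=A_ub, b_ub=b_ub)  # optimum puts weight on v1
print("observational optimum:", sol_obs.x)
print("interventional optimum:", sol_int.x)
```

The same constraint set admits two different "best" solutions depending on whether the cost parameters were estimated observationally or interventionally — the kind of gap that confounding opens up for adversarial-style manipulation.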
Related papers
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve the reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z) - Towards an Understanding of Stepwise Inference in Transformers: A
Synthetic Graph Navigation Model [19.826983068662106]
We propose to study autoregressive Transformer models on a synthetic task that embodies the multi-step nature of problems where stepwise inference is generally most useful.
Despite its simplicity, we find we can empirically reproduce and analyze several phenomena observed at scale.
arXiv Detail & Related papers (2024-02-12T16:25:47Z) - Deep Backtracking Counterfactuals for Causally Compliant Explanations [57.94160431716524]
We introduce a practical method called deep backtracking counterfactuals (DeepBC) for computing backtracking counterfactuals in structural causal models.
As a special case, our formulation reduces to methods in the field of counterfactual explanations.
arXiv Detail & Related papers (2023-10-11T17:11:10Z) - Mitigating Prior Errors in Causal Structure Learning: Towards LLM driven
Prior Knowledge [17.634793921251777]
We aim to tackle erroneous prior causal statements from Large Language Models (LLMs).
As a pioneering attempt, we propose a BN learning strategy resilient to prior errors without the need for human intervention.
Specifically, we highlight its substantial ability to resist order-reversed errors while maintaining the majority of correct prior knowledge.
arXiv Detail & Related papers (2023-06-12T11:24:48Z) - Causal Triplet: An Open Challenge for Intervention-centric Causal
Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z) - Rediscovering Argumentation Principles Utilizing Collective Attacks [26.186171927678874]
We extend the principle-based approach to Argumentation Frameworks with Collective Attacks (SETAFs).
Our analysis shows that investigating principles based on decomposing the given SETAF (e.g. directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs.
arXiv Detail & Related papers (2022-05-06T11:41:23Z) - An Intermediate-level Attack Framework on The Basis of Linear Regression [89.85593878754571]
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.
We advocate to establish a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to classification prediction loss of the adversarial example.
We show that 1) a variety of linear regression models can all be considered to establish the mapping, 2) the magnitude of the finally obtained intermediate-level discrepancy is linearly correlated with adversarial transferability, and 3) a further performance boost can be achieved by performing multiple runs of the baseline attack with
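The stated linear-mapping idea can be sketched on synthetic data (the arrays, dimensions, and loss proxy below are invented placeholders, not the paper's actual features or attack): fit ordinary least squares from intermediate-level feature discrepancies to a loss value, then check that the projected discrepancy correlates with the loss.

```python
# Hedged toy sketch: learn a linear map from feature discrepancies
# (adversarial features minus benign features) to a scalar loss proxy.
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(200, 16))    # synthetic intermediate-level discrepancies
w_true = rng.normal(size=16)      # ground-truth linear relation (invented)
loss = D @ w_true + rng.normal(scale=0.01, size=200)  # loss proxy + small noise

# Ordinary least squares: the simplest of the "variety of linear regression
# models" the summary mentions.
w, *_ = np.linalg.lstsq(D, loss, rcond=None)

# The magnitude of the mapped discrepancy tracks the loss almost perfectly here:
corr = np.corrcoef(D @ w, loss)[0, 1]
print("correlation:", round(corr, 3))
```

In the actual method the loss would come from a target classifier and the discrepancies from a chosen intermediate layer; this sketch only shows the regression step itself.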
arXiv Detail & Related papers (2022-03-21T03:54:53Z) - Deep Hierarchy in Bandits [51.22833900944146]
Mean rewards of actions are often correlated.
To maximize statistical efficiency, it is important to leverage these correlations when learning.
We formulate a bandit variant of this problem where the correlations of mean action rewards are represented by a hierarchical Bayesian model.
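A minimal sketch of such a hierarchical Bayesian model, under assumed Gaussian conjugacy (the variances and two-arm structure here are illustrative choices, not the paper's formulation): two arms share a latent hyper-mean, so a reward observed on one arm shifts the posterior belief about the other.

```python
# Two-arm hierarchy: theta ~ N(0, tau2) is a shared hyper-mean, each arm's
# mean reward is mu_i = theta + N(0, sig2), and rewards add N(0, obs2) noise.
tau2 = 1.0    # prior variance of the shared hyper-mean theta
sig2 = 0.25   # per-arm deviation variance around theta
obs2 = 0.1    # reward observation noise variance

# Observe a single reward r on arm 1. Marginally r ~ N(theta, sig2 + obs2),
# so the Gaussian posterior over theta is conjugate:
r = 2.0
s = sig2 + obs2
post_var = 1.0 / (1.0 / tau2 + 1.0 / s)
post_theta = post_var * (r / s)

# Arm 2 was never played, yet its posterior-predictive mean reward equals
# E[theta | r], pulled from the prior mean 0 toward the observed reward:
prior_mean_arm2 = 0.0
post_mean_arm2 = post_theta
print(prior_mean_arm2, "->", post_mean_arm2)
```

This transfer of information across arms through the shared latent is exactly the statistical efficiency gain the summary describes.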
arXiv Detail & Related papers (2022-02-03T08:15:53Z) - On the Connections between Counterfactual Explanations and Adversarial
Examples [14.494463243702908]
We make one of the first attempts at formalizing the connections between counterfactual explanations and adversarial examples.
Our analysis demonstrates that several popular counterfactual explanation and adversarial example generation methods are equivalent.
We empirically validate our theoretical findings using extensive experimentation with synthetic and real world datasets.
arXiv Detail & Related papers (2021-06-18T08:22:24Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - Case-Based Abductive Natural Language Inference [4.726777092009554]
Case-Based Abductive Natural Language Inference (CB-ANLI)
arXiv Detail & Related papers (2020-09-30T09:50:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.