Algorithmic recourse under imperfect causal knowledge: a probabilistic
approach
- URL: http://arxiv.org/abs/2006.06831v3
- Date: Fri, 23 Oct 2020 11:31:38 GMT
- Title: Algorithmic recourse under imperfect causal knowledge: a probabilistic
approach
- Authors: Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera
- Abstract summary: We show that it is impossible to guarantee recourse without access to the true structural equations.
We propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.
- Score: 15.124107808802703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has discussed the limitations of counterfactual explanations to
recommend actions for algorithmic recourse, and argued for the need of taking
causal relationships between features into consideration. Unfortunately, in
practice, the true underlying structural causal model is generally unknown. In
this work, we first show that it is impossible to guarantee recourse without
access to the true structural equations. To address this limitation, we propose
two probabilistic approaches to select optimal actions that achieve recourse
with high probability given limited causal knowledge (e.g., only the causal
graph). The first captures uncertainty over structural equations under additive
Gaussian noise, and uses Bayesian model averaging to estimate the
counterfactual distribution. The second removes any assumptions on the
structural equations by instead computing the average effect of recourse
actions on individuals similar to the person who seeks recourse, leading to a
novel subpopulation-based interventional notion of recourse. We then derive a
gradient-based procedure for selecting optimal recourse actions, and
empirically show that the proposed approaches lead to more reliable
recommendations under imperfect causal knowledge than non-probabilistic
baselines.
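To make the first approach concrete, here is a minimal sketch, assuming a toy two-variable causal graph (x1 -> x2) with additive Gaussian noise, an uncertain structural weight, and a hypothetical decision rule h; every function name and numeric value below is an illustrative assumption, not the authors' exact Bayesian-model-averaging procedure or gradient-based solver. The sketch estimates, by Monte Carlo over sampled structural equations, the probability that the action do(x1 := x1 + delta) yields a favourable counterfactual outcome, then picks the cheapest candidate action that clears a confidence threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decision rule (not from the paper): favourable iff x1 + x2 > 2.
def h(x1, x2):
    return (x1 + x2 > 2.0).astype(float)

# Assumed causal graph: x1 -> x2 with additive Gaussian noise,
#   x2 = w * x1 + u,
# where the true weight w is unknown and only a belief over it is available.
def sample_weights(n):
    # Stand-in for posterior uncertainty over the structural equation.
    return rng.normal(loc=1.0, scale=0.3, size=n)

def recourse_probability(x1_obs, x2_obs, delta, n_models=2000):
    """Monte Carlo estimate of P(favourable outcome) after do(x1 := x1_obs + delta)."""
    w = sample_weights(n_models)
    u = x2_obs - w * x1_obs       # abduction: infer the noise under each sampled SCM
    x1_cf = x1_obs + delta        # action: intervene on x1
    x2_cf = w * x1_cf + u         # downstream effect propagates through the graph
    return h(x1_cf, x2_cf).mean()

# Select the cheapest action (smallest delta) whose estimated probability
# of achieving recourse exceeds a confidence threshold.
x1_obs, x2_obs = 0.5, 0.4                      # individual currently denied (h = 0)
candidates = np.linspace(0.0, 3.0, 61)         # grid of candidate interventions on x1
threshold = 0.95
feasible = [d for d in candidates
            if recourse_probability(x1_obs, x2_obs, d) >= threshold]
if feasible:
    best = min(feasible)
    print(f"cheapest confident action: do(x1 := x1 + {best:.2f})")
else:
    print("no candidate action reaches the confidence threshold")
```

The subpopulation-based variant described in the abstract would instead resample the noise term for individuals similar to the recourse seeker (an interventional rather than counterfactual computation), and the paper's gradient-based procedure replaces the grid over delta with differentiable optimisation of the action; both are omitted here for brevity.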
Related papers
- Non-Convex Robust Hypothesis Testing using Sinkhorn Uncertainty Sets [18.46110328123008]
We present a new framework to address the non-convex robust hypothesis testing problem.
The goal is to seek the optimal detector that minimizes the maximum numerical risk.
arXiv Detail & Related papers (2024-03-21T20:29:43Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Distributionally Robust Recourse Action [12.139222986297263]
A recourse action aims to explain a particular algorithmic decision by showing one specific way in which the instance could be modified to receive an alternate outcome.
We propose the Distributionally Robust Recourse Action (DiRRAc) framework, which generates a recourse action that has a high probability of being valid under a mixture of model shifts. (A generic sketch of checking a candidate action against an ensemble of shifted models appears after this list.)
arXiv Detail & Related papers (2023-02-22T08:52:01Z)
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- On the Trade-Off between Actionable Explanations and the Right to be Forgotten [21.26254644739585]
We study the problem of recourse invalidation in the context of data deletion requests.
We show that the removal of as few as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms.
arXiv Detail & Related papers (2022-08-30T10:35:32Z)
- Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- A Causal Perspective on Meaningful and Robust Algorithmic Recourse [1.0804061924593267]
In general, ML models do not predict well in interventional distributions.
We propose meaningful algorithmic recourse (MAR) that only recommends actions that improve both prediction and target.
arXiv Detail & Related papers (2021-07-16T12:37:54Z)
- Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects [0.6299766708197883]
Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.
Existing methods compute the effect of recourse actions using a causal model learnt from data under the assumption of no hidden confounding and modelling assumptions such as additive noise.
We propose an alternative approach for discrete random variables which relaxes these assumptions and allows for unobserved confounding and arbitrary structural equations.
arXiv Detail & Related papers (2021-06-22T15:07:49Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Excursion Search for Constrained Bayesian Optimization under a Limited Budget of Failures [62.41541049302712]
We propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures.
Our algorithm uses the failures budget more efficiently in a variety of optimization experiments, and generally achieves lower regret, than state-of-the-art methods.
arXiv Detail & Related papers (2020-05-15T09:54:09Z)
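As referenced from the Distributionally Robust Recourse Action entry above, a minimal hedged sketch of the underlying robustness idea: given a candidate recourse point, estimate the fraction of an ensemble of slightly shifted linear models that still accept it. The nominal weights w0, the Gaussian shift distribution, and the example points are all assumptions for illustration and are not taken from the DiRRAc formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed nominal linear model (illustrative, not DiRRAc): accept iff w0 . x + b0 > 0.
w0 = np.array([1.0, 1.0])
b0 = -2.0

def sample_shifted_models(n, scale=0.2):
    """Simulate plausible model shifts (e.g., retraining on updated data) by
    perturbing the nominal parameters with Gaussian noise of the given scale."""
    ws = w0 + rng.normal(0.0, scale, size=(n, 2))
    bs = b0 + rng.normal(0.0, scale, size=n)
    return ws, bs

def validity_under_shift(x, n_models=5000):
    """Fraction of shifted models that still accept the recourse point x."""
    ws, bs = sample_shifted_models(n_models)
    return float((ws @ x + bs > 0.0).mean())

# A point barely accepted by the nominal model versus one with a comfortable margin.
x_marginal = np.array([1.05, 1.00])
x_robust = np.array([1.8, 1.6])

for name, x in [("marginal", x_marginal), ("robust", x_robust)]:
    print(f"{name} recourse point: valid under {validity_under_shift(x):.1%} of shifted models")
```

A point with a larger decision margin survives far more of the sampled shifts, which is the intuition behind preferring robust recourse actions over minimally valid ones.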