A Causal Perspective on Meaningful and Robust Algorithmic Recourse
- URL: http://arxiv.org/abs/2107.07853v1
- Date: Fri, 16 Jul 2021 12:37:54 GMT
- Title: A Causal Perspective on Meaningful and Robust Algorithmic Recourse
- Authors: Gunnar König, Timo Freiesleben, Moritz Grosse-Wentrup
- Abstract summary: In general, ML models do not predict well in interventional distributions.
We propose meaningful algorithmic recourse (MAR) that only recommends actions that improve both prediction and target.
- Score: 1.0804061924593267
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Algorithmic recourse explanations inform stakeholders on how to act to revert
unfavorable predictions. However, in general, ML models do not predict well in
interventional distributions. Thus, an action that changes the prediction in
the desired way may not lead to an improvement of the underlying target. Such
recourse is neither meaningful nor robust to model refits. Extending the work
of Karimi et al. (2021), we propose meaningful algorithmic recourse (MAR) that
only recommends actions that improve both prediction and target. We justify
this selection constraint by highlighting the differences between model audit
and meaningful, actionable recourse explanations. Additionally, we introduce a
relaxation of MAR called effective algorithmic recourse (EAR), which, under
certain assumptions, yields meaningful recourse by only allowing interventions
on causes of the target.
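To make the MAR/EAR distinction concrete, here is a minimal illustrative sketch (not taken from the paper; the SCM, variable names, and numbers are assumptions made for illustration). In a hypothetical structural causal model C -> Y -> S, the target Y is driven by a cause C and has a downstream symptom S. A classifier fitted on observational data leans heavily on S, so an action on S reverts the prediction without touching Y, whereas an action on the cause C, the only kind of intervention the EAR constraint admits, improves both prediction and target.

```python
# Illustrative sketch only (not the authors' method): toy SCM  C -> Y -> S.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Observational training data sampled from the SCM
C = rng.normal(size=n)                           # cause of the target
Y = C + 0.5 * rng.normal(size=n) > 0             # binary target, depends only on C
S = Y.astype(float) + 0.3 * rng.normal(size=n)   # symptom: an effect of Y, not a cause

clf = LogisticRegression(max_iter=1000).fit(np.column_stack([C, S]), Y)  # model relies on S

def simulate(do_C=-1.0, do_S=None, m=10_000):
    """Resample the SCM for a rejected individual (C = -1 by default) under an
    intervention and report how often prediction and target are favourable."""
    c = np.full(m, do_C)
    y = c + 0.5 * rng.normal(size=m) > 0                       # Y reacts to do(C)
    s = (np.full(m, do_S) if do_S is not None
         else y.astype(float) + 0.3 * rng.normal(size=m))      # S reacts to Y unless forced
    pred = clf.predict(np.column_stack([c, s]))
    return pred.mean(), y.mean()

for label, action in [("no action", {}),
                      ("act on symptom S", {"do_S": 3.0}),
                      ("act on cause C (EAR-admissible)", {"do_C": 1.0})]:
    p, y = simulate(**action)
    print(f"{label:32s} P(favourable prediction)={p:.2f}  P(Y=1)={y:.2f}")
```

With this construction, acting on the symptom should yield a favourable prediction while leaving P(Y=1) essentially unchanged, whereas acting on the cause should raise both, which is precisely the gap between merely reverting a prediction and meaningful recourse.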
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Distributionally Robust Recourse Action [12.139222986297263]
A recourse action aims to explain a particular algorithmic decision by showing one specific way in which the instance could be modified to receive an alternate outcome.
We propose the Distributionally Robust Recourse Action (DiRRAc) framework, which generates a recourse action that has a high probability of being valid under a mixture of model shifts.
arXiv Detail & Related papers (2023-02-22T08:52:01Z)
- Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse [34.39887495671287]
We propose an objective function which simultaneously minimizes the gap between the achieved (resulting) and desired recourse invalidation rates.
We develop novel theoretical results to characterize the recourse invalidation rates corresponding to any given instance.
Experimental evaluation with multiple real world datasets demonstrates the efficacy of the proposed framework.
arXiv Detail & Related papers (2022-03-13T21:39:24Z)
- Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects [0.6299766708197883]
Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.
Existing methods compute the effect of recourse actions using a causal model learnt from data, under the assumptions of no hidden confounding and restrictive modelling choices such as additive noise.
We propose an alternative approach for discrete random variables which relaxes these assumptions and allows for unobserved confounding and arbitrary structural equations.
arXiv Detail & Related papers (2021-06-22T15:07:49Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Algorithmic recourse under imperfect causal knowledge: a probabilistic approach [15.124107808802703]
We show that it is impossible to guarantee recourse without access to the true structural equations.
We propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.
arXiv Detail & Related papers (2020-06-11T21:19:07Z)
- Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning [9.279259759707996]
Causal approaches to post-hoc explainability for black-box prediction models have become increasingly popular.
We learn causal graphical representations that allow for arbitrary unmeasured confounding among features.
Our approach is motivated by a counterfactual theory of causal explanation wherein good explanations point to factors that are "difference-makers" in an interventionist sense.
arXiv Detail & Related papers (2020-06-03T19:02:34Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.