Performative Validity of Recourse Explanations
- URL: http://arxiv.org/abs/2506.15366v1
- Date: Wed, 18 Jun 2025 11:34:15 GMT
- Title: Performative Validity of Recourse Explanations
- Authors: Gunnar König, Hidde Fokkema, Timo Freiesleben, Celestine Mendler-Dünner, Ulrike von Luxburg
- Abstract summary: We characterize the conditions under which recourse explanations remain valid under performativity. A key finding is that recourse actions may become invalid if they are influenced by or if they intervene on non-causal variables.
- Score: 11.237217706303175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When applicants get rejected by an algorithmic decision system, recourse explanations provide actionable suggestions for how to change their input features to get a positive evaluation. A crucial yet overlooked phenomenon is that recourse explanations are performative: When many applicants act according to their recommendations, their collective behavior may change statistical regularities in the data and, once the model is refitted, also the decision boundary. Consequently, the recourse algorithm may render its own recommendations invalid, such that applicants who make the effort of implementing their recommendations may be rejected again when they reapply. In this work, we formally characterize the conditions under which recourse explanations remain valid under performativity. A key finding is that recourse actions may become invalid if they are influenced by or if they intervene on non-causal variables. Based on our analysis, we caution against the use of standard counterfactual explanations and causal recourse methods, and instead advocate for recourse methods that recommend actions exclusively on causal variables.
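The feedback loop described in the abstract can be illustrated with a minimal, hypothetical simulation (not from the paper): a model is fit on data where one feature is causal for the label and a second is merely a correlated proxy, rejected applicants follow a simple linear counterfactual recourse rule, and the model is refit on the shifted population. All names (`fit_logistic`, `recourse`) and the specific recourse rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.5, steps=2000):
    # Plain gradient descent on the logistic loss; intercept as an extra column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

def recourse(w, x, margin=0.5):
    # Minimal counterfactual action: move x straight toward the boundary
    # along w, stopping `margin` past it, so the action is valid under the
    # model that issued the recommendation.
    w_feat, b = w[:-1], w[-1]
    step = (margin * np.linalg.norm(w_feat) - (x @ w_feat + b)) / (w_feat @ w_feat)
    return x + step * w_feat

# Synthetic applicants: x0 is causal for the label, x1 is only a proxy of x0.
n = 500
x0 = rng.normal(size=n)
x1 = x0 + 0.3 * rng.normal(size=n)
X = np.column_stack([x0, x1])
y = (x0 > 0).astype(int)

w_old = fit_logistic(X, y)
rej = predict(w_old, X) == 0

# All rejected applicants implement their recommended action.
X_new = X.copy()
X_new[rej] = np.array([recourse(w_old, x) for x in X[rej]])
valid_before = predict(w_old, X_new[rej]).mean()  # 1.0 by construction

# Outcomes still depend only on the causal feature x0, so the effort spent
# moving the proxy x1 changes nothing; refitting on the shifted population
# can move the boundary and invalidate some of the very recommendations
# that produced the shift.
y_new = (X_new[:, 0] > 0).astype(int)
w_new = fit_logistic(X_new, y_new)
valid_after = predict(w_new, X_new[rej]).mean()
print("valid under old model:", valid_before, "| after refit:", valid_after)
```

Under this toy setup, every recommendation is valid against the model that issued it, but the fraction still valid after refitting can drop, which is the performative invalidation the paper formalizes.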
Related papers
- Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse [7.730963708373791]
Consumer protection rules require companies to explain predictions to decision subjects. We show how these practices can undermine consumers by highlighting features that would not lead to an improved outcome. We propose to address these issues by highlighting features based on their responsiveness score.
arXiv Detail & Related papers (2024-10-29T23:37:49Z) - Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization [60.176008034221404]
Direct Preference Optimization (DPO) and its variants are increasingly used for aligning language models with human preferences. Prior work has observed that the likelihood of preferred responses often decreases during training. We demonstrate that likelihood displacement can be catastrophic, shifting probability mass from preferred responses to responses with an opposite meaning.
arXiv Detail & Related papers (2024-10-11T14:22:44Z) - CSRec: Rethinking Sequential Recommendation from A Causal Perspective [25.69446083970207]
The essence of sequential recommender systems (RecSys) lies in understanding how users make decisions.
We propose a novel formulation of sequential recommendation, termed Causal Sequential Recommendation (CSRec)
CSRec aims to predict the probability of a recommended item's acceptance within a sequential context and backtrack how current decisions are made.
arXiv Detail & Related papers (2024-08-23T23:19:14Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Algorithmic Recourse with Missing Values [11.401006371457436]
This paper proposes a new framework of algorithmic recourse (AR) that works even in the presence of missing values.
AR aims to provide a recourse action for altering the undesired prediction result given by a classifier.
Experimental results demonstrated the efficacy of our method in the presence of missing values compared to the baselines.
arXiv Detail & Related papers (2023-04-28T03:22:48Z) - Debiasing Recommendation by Learning Identifiable Latent Confounders [49.16119112336605]
Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the aforementioned non-identification issue.
arXiv Detail & Related papers (2023-02-10T05:10:26Z) - On the Trade-Off between Actionable Explanations and the Right to be Forgotten [21.26254644739585]
We study the problem of recourse invalidation in the context of data deletion requests.
We show that the removal of as little as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms.
arXiv Detail & Related papers (2022-08-30T10:35:32Z) - From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse [0.0]
We argue that recourse should be viewed as a recommendation problem, not an explanation problem.
We illustrate by considering the case of diversity constraints on algorithmic recourse.
arXiv Detail & Related papers (2022-05-30T20:09:42Z) - Sayer: Using Implicit Feedback to Optimize System Policies [63.992191765269396]
We develop a methodology that leverages implicit feedback to evaluate and train new system policies.
Sayer builds on two ideas from reinforcement learning to leverage data collected by an existing policy.
We show that Sayer can evaluate arbitrary policies accurately, and train new policies that outperform the production policies.
arXiv Detail & Related papers (2021-10-28T04:16:56Z) - Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects [0.6299766708197883]
Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.
Existing methods compute the effect of recourse actions using a causal model learnt from data under the assumption of no hidden confounding and modelling assumptions such as additive noise.
We propose an alternative approach for discrete random variables which relaxes these assumptions and allows for unobserved confounding and arbitrary structural equations.
arXiv Detail & Related papers (2021-06-22T15:07:49Z) - Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
arXiv Detail & Related papers (2020-07-02T14:24:17Z) - Explaining reputation assessments [6.87724532311602]
We propose an approach to explain the rationale behind assessments from quantitative reputation models.
Our approach adapts, extends and combines existing approaches for explaining decisions made using multi-attribute decision models.
arXiv Detail & Related papers (2020-06-15T23:19:35Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.