The Risks of Recourse in Binary Classification
- URL: http://arxiv.org/abs/2306.00497v2
- Date: Fri, 1 Mar 2024 13:15:25 GMT
- Title: The Risks of Recourse in Binary Classification
- Authors: Hidde Fokkema, Damien Garreau, Tim van Erven
- Abstract summary: We study whether providing algorithmic recourse is beneficial or harmful at the population level.
We find that there are many plausible scenarios in which providing recourse turns out to be harmful.
We conclude that the current concept of algorithmic recourse is not reliably beneficial, and therefore requires rethinking.
- Score: 10.067421338825545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic recourse provides explanations that help users overturn an
unfavorable decision by a machine learning system. But so far very little
attention has been paid to whether providing recourse is beneficial or not. We
introduce an abstract learning-theoretic framework that compares the risks
(i.e., expected losses) for classification with and without algorithmic
recourse. This allows us to answer the question of when providing recourse is
beneficial or harmful at the population level. Surprisingly, we find that there
are many plausible scenarios in which providing recourse turns out to be
harmful, because it pushes users to regions of higher class uncertainty and
therefore leads to more mistakes. We further study whether the party deploying
the classifier has an incentive to strategize in anticipation of having to
provide recourse, and we find that sometimes they do, to the detriment of their
users. Providing algorithmic recourse may therefore also be harmful at the
systemic level. We confirm our theoretical findings in experiments on simulated
and real-world data. All in all, we conclude that the current concept of
algorithmic recourse is not reliably beneficial, and therefore requires
rethinking.
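To make the population-level comparison concrete, here is a minimal toy simulation in the spirit of the abstract (an illustrative sketch, not the paper's exact model; the Gaussian-mixture population, the move-just-past-the-boundary recourse rule, and the redrawing of labels at the new point are all assumptions made for this example). A Bayes-optimal threshold classifier is applied to a one-dimensional mixture; every rejected user moves just across the decision boundary, where class uncertainty is maximal, and the population risk roughly doubles:
```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy population: Y ~ Bernoulli(1/2), X | Y=y ~ N(2y - 1, 1)
y = rng.integers(0, 2, n)
x = rng.normal(2 * y - 1, 1.0)

def f(x):
    """Bayes-optimal classifier for this mixture: accept iff x > 0."""
    return (x > 0).astype(int)

def p_y1(x):
    """True conditional probability P(Y = 1 | X = x) for this mixture."""
    a = np.exp(-0.5 * (x - 1) ** 2)
    b = np.exp(-0.5 * (x + 1) ** 2)
    return a / (a + b)

risk_without = np.mean(f(x) != y)

# Recourse: every rejected user moves just across the decision boundary,
# where class uncertainty is maximal; the label at the new point is drawn
# from the true conditional distribution there.
x_rec = np.where(f(x) == 1, x, 0.01)
y_rec = np.where(f(x) == 1, y, rng.binomial(1, p_y1(x_rec)))
risk_with = np.mean(f(x_rec) != y_rec)

print(f"risk without recourse: {risk_without:.3f}")  # ~0.16
print(f"risk with recourse:    {risk_with:.3f}")     # ~0.33: recourse harms
```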
Related papers
- Avoiding Catastrophe in Online Learning by Asking for Help [7.881265948305421]
We propose an online learning problem where the goal is to minimize the chance of catastrophe.
We provide an algorithm whose regret and rate of querying the mentor both approach 0 as the time horizon grows.
arXiv Detail & Related papers (2024-02-12T21:12:11Z)
- User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z)
- On the Trade-Off between Actionable Explanations and the Right to be Forgotten [21.26254644739585]
We study the problem of recourse invalidation in the context of data deletion requests.
We show that the removal of as little as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms.
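A minimal sketch of how such invalidation could be measured (a hypothetical setup, not the paper's experiment: `recourses` holds the counterfactual points issued to users under the original model, and the favorable label is assumed to be 1):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def invalidation_rate(X, y, recourses, drop_idx):
    """Fraction of previously issued recourses that a model retrained
    without the points in drop_idx no longer accepts (favorable label 1)."""
    keep = np.setdiff1d(np.arange(len(X)), drop_idx)
    retrained = LogisticRegression().fit(X[keep], y[keep])
    return float(np.mean(retrained.predict(recourses) != 1))
```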
arXiv Detail & Related papers (2022-08-30T10:35:32Z)
- From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse [0.0]
We argue that recourse should be viewed as a recommendation problem, not an explanation problem.
We illustrate by considering the case of diversity constraints on algorithmic recourse.
arXiv Detail & Related papers (2022-05-30T20:09:42Z)
- Risk Preferences of Learning Algorithms [0.0]
We show that a widely used learning algorithm, $\varepsilon$-Greedy, exhibits emergent risk aversion.
We discuss two methods to correct this bias.
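The effect is easy to reproduce in a toy simulation (an illustrative sketch, not the paper's analysis; the two-arm setup and parameters are assumptions). Give $\varepsilon$-Greedy a safe arm and a risky arm with identical means: unlucky draws push the risky arm's estimate below the safe arm's, and greedy play then avoids it until rare exploration corrects the estimate, so the safe arm ends up pulled far more than half the time.
```python
import numpy as np

rng = np.random.default_rng(1)

def safe_arm_share(T=1000, eps=0.1):
    """Fraction of pulls on the safe arm. Arm 0 pays 0.5 deterministically;
    arm 1 pays 0 or 1 with probability 1/2. Both means equal 0.5, so a
    risk-neutral learner should be indifferent between the arms."""
    means = np.zeros(2)
    counts = np.zeros(2)
    for t in range(T):
        if t < 2:
            a = t                                  # initialize: pull each arm once
        elif rng.random() < eps:
            a = int(rng.integers(2))               # explore uniformly
        else:
            best = np.flatnonzero(means == means.max())
            a = int(rng.choice(best))              # exploit, ties broken at random
        r = 0.5 if a == 0 else float(rng.integers(2))
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]     # incremental mean update
    return counts[0] / T

# Averaged over runs, the safe arm gets well over half of the pulls:
print(f"mean safe-arm share: {np.mean([safe_arm_share() for _ in range(200)]):.2f}")
```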
arXiv Detail & Related papers (2022-05-10T01:30:24Z)
- Sayer: Using Implicit Feedback to Optimize System Policies [63.992191765269396]
We develop a methodology that leverages implicit feedback to evaluate and train new system policies.
Sayer builds on two ideas from reinforcement learning to leverage data collected by an existing policy.
We show that Sayer can evaluate arbitrary policies accurately, and train new policies that outperform the production policies.
arXiv Detail & Related papers (2021-10-28T04:16:56Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination [82.52105963476703]
A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise.
First-order guarantees are relatively well understood in statistical and online learning.
We show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees.
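For reference, the triangular discrimination between distributions $P$ and $Q$ is $\Delta(P, Q) = \sum_{x} \frac{(P(x) - Q(x))^2}{P(x) + Q(x)}$, a quantity equivalent to the squared Hellinger distance up to constant factors ($2H^2 \le \Delta \le 4H^2$).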
arXiv Detail & Related papers (2021-07-05T19:20:34Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
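A generic sketch of the importance-sampling weighted objective described above (an illustration of the general technique, not the authors' exact estimator): each observed loss is reweighted by the inverse of the probability the adaptive collection policy assigned to the observed action.
```python
import numpy as np

def weighted_empirical_risk(losses, propensities):
    """Inverse-propensity-weighted empirical risk for adaptively collected
    data: losses[i] is the hypothesis' loss on sample i, and propensities[i]
    is the probability the logging policy assigned to the action actually
    taken at step i. Reweighting corrects the sampling bias, making the
    average an unbiased estimate of the population risk."""
    w = 1.0 / np.asarray(propensities, dtype=float)
    return float(np.mean(w * np.asarray(losses, dtype=float)))
```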
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Algorithmic recourse under imperfect causal knowledge: a probabilistic approach [15.124107808802703]
We show that it is impossible to guarantee recourse without access to the true structural equations.
We propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.
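A minimal sketch of the "recourse with high probability" idea (a hypothetical interface, not the paper's algorithm: `simulate_success(action)` draws one plausible post-action outcome under sampled causal assumptions, and `cost(action)` scores effort):
```python
import numpy as np

def choose_recourse(actions, simulate_success, cost, n_mc=1000, delta=0.1):
    """Return the cheapest action whose Monte Carlo estimate of the
    probability of achieving recourse is at least 1 - delta, or None
    if no candidate action is reliable enough."""
    feasible = [
        a for a in actions
        if np.mean([simulate_success(a) for _ in range(n_mc)]) >= 1 - delta
    ]
    return min(feasible, key=cost) if feasible else None
```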
arXiv Detail & Related papers (2020-06-11T21:19:07Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
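A generic illustration of fairness-constrained re-ranking (not the paper's knowledge-graph method; shown here with a simple group-exposure quota as the most basic instance of the idea):
```python
import numpy as np

def fair_rerank(scores, groups, k, quota):
    """Greedy top-k re-ranking under a simple exposure constraint:
    at least `quota` of the k returned items must come from group 1.
    Items are otherwise taken in decreasing order of relevance score."""
    order = list(np.argsort(scores)[::-1])       # best score first
    picked = []
    for i in order:
        slots_left = k - len(picked)
        still_needed = quota - sum(groups[j] == 1 for j in picked)
        if groups[i] != 1 and still_needed >= slots_left:
            continue                              # slot reserved for group 1
        picked.append(i)
        if len(picked) == k:
            break
    return picked
```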
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.