Setting the Right Expectations: Algorithmic Recourse Over Time
- URL: http://arxiv.org/abs/2309.06969v1
- Date: Wed, 13 Sep 2023 14:04:15 GMT
- Title: Setting the Right Expectations: Algorithmic Recourse Over Time
- Authors: Joao Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, Julia
Stoyanovich
- Abstract summary: We propose an agent-based simulation framework for studying the effects of a continuously changing environment on algorithmic recourse.
Our findings highlight that only a small set of specific parameterizations result in algorithmic recourse that is reliable for agents over time.
- Score: 16.930905275894183
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Algorithmic systems are often called upon to assist in high-stakes decision
making. In light of this, algorithmic recourse, the principle wherein
individuals should be able to take action against an undesirable outcome made
by an algorithmic system, is receiving growing attention. The bulk of the
literature on algorithmic recourse to date focuses primarily on how to provide
recourse to a single individual, overlooking a critical element: the effects of
a continuously changing context. Disregarding these effects on recourse is a
significant oversight, since, in almost all cases, recourse consists of an
individual making a first, unfavorable attempt, and then being given an
opportunity to make one or several attempts at a later date - when the context
might have changed. This can create false expectations, as initial recourse
recommendations may become less reliable over time due to model drift and
competition between individuals for access to the favorable outcome.
In this work we propose an agent-based simulation framework for studying the
effects of a continuously changing environment on algorithmic recourse. In
particular, we identify two main effects that can alter the reliability of
recourse for individuals represented by the agents: (1) competition with other
agents acting upon recourse, and (2) competition with new agents entering the
environment. Our findings highlight that only a small set of specific
parameterizations result in algorithmic recourse that is reliable for agents
over time. Consequently, we argue that substantial additional work is needed to
understand recourse reliability over time, and to develop recourse methods that
reward agents' effort.
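To make these two effects concrete, below is a minimal illustrative sketch of an agent-based recourse simulation. It is not the authors' framework or code: it assumes a single "score" feature, a fixed number k of favorable outcomes per round, rejected agents who may act on the recommendation to exceed the threshold they were shown, and new agents entering each round. All parameter names and values are assumptions chosen for illustration; the sketch measures how often a followed recommendation is invalidated because the admission threshold has moved.

```python
import random

random.seed(0)

N_ROUNDS = 20        # number of decision rounds
K = 50               # favorable outcomes available per round (scarce resource)
N_NEW_AGENTS = 80    # new agents entering the environment each round
ACT_PROB = 0.5       # probability a rejected agent acts on its recourse
EFFORT = 0.05        # margin by which acting agents exceed the old threshold

agents = [random.random() for _ in range(300)]   # initial population of scores
followed = 0                                     # recommendations acted upon
invalidated = 0                                  # acted upon, yet still rejected

for _ in range(N_ROUNDS):
    # Top-k admission: the k highest-scoring agents receive the favorable outcome.
    threshold = sorted(agents, reverse=True)[K - 1]
    accepted = {i for i, s in enumerate(agents) if s >= threshold}

    # Rejected agents receive recourse: "raise your score above today's threshold".
    survivors = []
    for i, score in enumerate(agents):
        if i in accepted:
            continue  # accepted agents leave the pool
        if random.random() < ACT_PROB:
            survivors.append((threshold + EFFORT, True))   # acted on recourse
        else:
            survivors.append((score, False))               # did not act

    # New agents enter and compete for the same k slots in the next round.
    survivors += [(random.random(), False) for _ in range(N_NEW_AGENTS)]

    # Reliability check: does acting on recourse still clear next round's threshold?
    next_threshold = sorted((s for s, _ in survivors), reverse=True)[K - 1]
    for score, acted in survivors:
        if acted:
            followed += 1
            if score < next_threshold:
                invalidated += 1

    agents = [s for s, _ in survivors]

print(f"recourse reliability over {N_ROUNDS} rounds: {1 - invalidated / followed:.2f}")
```

Even in this toy setting, raising ACT_PROB or N_NEW_AGENTS tends to push the next round's threshold upward, so reliability degrades through exactly the two competition effects described above.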
Related papers
- Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity [15.78130132380848]
Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change.
arXiv Detail & Related papers (2024-01-29T11:55:45Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Robustness Implies Fairness in Causal Algorithmic Recourse [13.86376549140248]
Algorithmic recourse aims to disclose the inner workings of the black-box decision process in situations where decisions have significant consequences.
To ensure an effective remedy, suggested interventions must not only be low-cost but also robust and fair.
This study explores the concept of individual fairness and adversarial robustness in causal algorithmic recourse.
arXiv Detail & Related papers (2023-02-07T13:40:56Z)
- Formalizing the Problem of Side Effect Regularization [81.97441214404247]
We propose a formal criterion for side effect regularization via the assistance game framework.
In these games, the agent solves a partially observable Markov decision process.
We show that this POMDP is solved by trading off the proxy reward with the agent's ability to achieve a range of future tasks.
arXiv Detail & Related papers (2022-06-23T16:36:13Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse [34.39887495671287]
We propose an objective function which simultaneously minimizes the gap between the achieved (resulting) and desired recourse invalidation rates.
We develop novel theoretical results to characterize the recourse invalidation rates corresponding to any given instance.
Experimental evaluation with multiple real world datasets demonstrates the efficacy of the proposed framework.
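As one concrete, purely illustrative reading of this trade-off, a recourse invalidation rate for a candidate counterfactual can be estimated by Monte Carlo: the fraction of small perturbations of the counterfactual that no longer receive the favorable prediction. The sketch below assumes a generic model_predict function, Gaussian perturbations, and a toy linear classifier; none of these names or values come from the paper.

```python
import numpy as np

def invalidation_rate(model_predict, x_cf, sigma=0.1, n_samples=1000, seed=0):
    """Monte Carlo estimate of a recourse invalidation rate: the fraction of
    Gaussian-perturbed copies of the counterfactual x_cf that lose the
    favorable (label 1) prediction.  Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples, x_cf.shape[0]))
    preds = model_predict(x_cf + noise)   # shape (n_samples,), values in {0, 1}
    return float(np.mean(preds == 0))

# Toy linear classifier and candidate recourse point (hypothetical values).
w, b = np.array([1.0, -0.5]), -0.2

def predict(X):
    return (X @ w + b > 0).astype(int)

x_cf = np.array([0.4, 0.1])
print(invalidation_rate(predict, x_cf))   # lower means more robust recourse
```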
arXiv Detail & Related papers (2022-03-13T21:39:24Z)
- Stateful Strategic Regression [20.7177095411398]
We describe the Stackelberg equilibrium of the resulting game and provide novel algorithms for computing it.
Our analysis reveals several intriguing insights about the role of multiple interactions in shaping the game's outcome.
Most importantly, we show that with multiple rounds of interaction at her disposal, the principal is more effective at incentivizing the agent to accumulate effort in her desired direction.
arXiv Detail & Related papers (2021-06-07T17:46:29Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
- Public Bayesian Persuasion: Being Almost Optimal and Almost Persuasive [57.47546090379434]
We study the public persuasion problem in the general setting with: (i) arbitrary state spaces; (ii) arbitrary action spaces; (iii) arbitrary sender's utility functions.
We provide a quasi-polynomial time bi-criteria approximation algorithm for arbitrary public persuasion problems that, in specific settings, yields a QPTAS.
arXiv Detail & Related papers (2020-02-12T18:59:18Z)