Robustness Implies Fairness in Causal Algorithmic Recourse
- URL: http://arxiv.org/abs/2302.03465v1
- Date: Tue, 7 Feb 2023 13:40:56 GMT
- Title: Robustness Implies Fairness in Causal Algorithmic Recourse
- Authors: Ahmad-Reza Ehyaei, Amir-Hossein Karimi, Bernhard Schölkopf, Setareh Maghsudi
- Abstract summary: Algorithmic recourse aims to disclose the inner workings of the black-box decision process in situations where decisions have significant consequences.
To ensure an effective remedy, suggested interventions must not only be low-cost but also robust and fair.
This study explores the concept of individual fairness and adversarial robustness in causal algorithmic recourse.
- Score: 13.86376549140248
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Algorithmic recourse aims to disclose the inner workings of the black-box
decision process in situations where decisions have significant consequences,
by providing recommendations to empower beneficiaries to achieve a more
favorable outcome. To ensure an effective remedy, suggested interventions must
not only be low-cost but also robust and fair. This goal is accomplished by
providing similar explanations to individuals who are alike. This study
explores the concept of individual fairness and adversarial robustness in
causal algorithmic recourse and addresses the challenge of achieving both. To
resolve the challenges, we propose a new framework for defining adversarially
robust recourse. The new setting views the protected feature as a pseudometric
and demonstrates that individual fairness is a special case of adversarial
robustness. Finally, we introduce the fair robust recourse problem to achieve
both desirable properties and show how it can be satisfied both theoretically
and empirically.
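To make the abstract's central claim concrete, below is a minimal, self-contained sketch, not the authors' implementation: a pseudometric that assigns zero distance to the protected attribute, a toy linear classifier, and a sampling-based robustness check. Every name, parameter, and the grid-search procedure is an illustrative assumption; the only point is that a recourse action that stays valid within a pseudometric ball is automatically valid for individuals who differ only in the protected attribute, which is the sense in which individual fairness becomes a special case of adversarial robustness.

```python
# Minimal sketch (assumed, not from the paper): robustness under a pseudometric
# that ignores the protected attribute implies individual fairness of recourse.
import numpy as np

# Features x = (x1, x2, a), where a in {0, 1} is the protected attribute.
W, B = np.array([1.0, 1.0, 0.0]), -1.0


def favorable(points):
    """Toy black-box decision for a batch of points: True means a favorable outcome."""
    return points @ W + B >= 0.0


def pseudometric(x, x_prime):
    """Distance that ignores the protected coordinate (hence only a pseudometric)."""
    diff = x - x_prime
    diff[2] = 0.0
    return np.linalg.norm(diff)


def robust_recourse(x, eps=0.3, step=0.1, n_samples=200, seed=0):
    """Cheapest additive action on (x1, x2) that remains favorable for every sampled
    point within an eps-ball of the pseudometric around the acted-upon individual."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=(n_samples, 2))
    noise = noise[np.linalg.norm(noise, axis=1) <= eps]  # keep samples inside the ball
    best_action, best_cost = None, np.inf
    for d1 in np.arange(0.0, 3.0, step):
        for d2 in np.arange(0.0, 3.0, step):
            action = np.array([d1, d2, 0.0])
            neighbours = np.tile(x + action, (len(noise), 1))
            neighbours[:, :2] += noise
            # Zero pseudometric distance: any protected value must also stay favorable.
            neighbours[:, 2] = rng.integers(0, 2, size=len(noise))
            if favorable(neighbours).all():
                cost = np.linalg.norm(action)
                if cost < best_cost:
                    best_action, best_cost = action, cost
    return best_action, best_cost


# Two individuals identical except for the protected attribute sit at pseudometric
# distance zero, so the robust recourse (and its cost) coincides for both.
x_a = np.array([0.1, 0.2, 0.0])
x_b = np.array([0.1, 0.2, 1.0])
print("pseudometric distance:", pseudometric(x_a, x_b))
print("robust recourse for A:", robust_recourse(x_a))
print("robust recourse for B:", robust_recourse(x_b))
```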
Related papers
- Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity [15.78130132380848]
Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change.
arXiv Detail & Related papers (2024-01-29T11:55:45Z) - Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z) - Understanding Fairness Surrogate Functions in Algorithmic Fairness [21.555040357521907]
We show that there is a surrogate-fairness gap between the fairness definition and the fairness surrogate function.
We elaborate a novel and general algorithm called Balanced Surrogate, which iteratively reduces the gap to mitigate unfairness.
arXiv Detail & Related papers (2023-10-17T12:40:53Z) - Rethinking Fairness for Human-AI Collaboration [32.969050978497066]
We propose a simple optimization strategy to identify the best performance-improving compliance-robustly fair policy.
It may be infeasible to design algorithmic recommendations that are simultaneously fair in isolation, compliance-robustly fair, and more accurate than the human policy.
arXiv Detail & Related papers (2023-10-05T16:21:42Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z) - On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z) - On the Adversarial Robustness of Causal Algorithmic Recourse [2.1132376804211543]
Recourse recommendations should ideally be robust to reasonably small uncertainty in the features of the individual seeking recourse.
We show that recourse methods offering minimally costly recourse fail to be robust.
We propose a model regularizer that encourages the additional cost of seeking robust recourse to be low.
arXiv Detail & Related papers (2021-12-21T16:00:54Z) - Identifying Best Fair Intervention [7.563864405505623]
We study the problem of best arm identification with a fairness constraint in a given causal model.
The problem is motivated by ensuring fairness on an online marketplace.
arXiv Detail & Related papers (2021-11-08T04:36:54Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.