Fairness in Algorithmic Recourse Through the Lens of Substantive
Equality of Opportunity
- URL: http://arxiv.org/abs/2401.16088v1
- Date: Mon, 29 Jan 2024 11:55:45 GMT
- Title: Fairness in Algorithmic Recourse Through the Lens of Substantive
Equality of Opportunity
- Authors: Andrew Bell, Joao Fonseca, Carlo Abrate, Francesco Bonchi, and Julia
Stoyanovich
- Abstract summary: Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change.
- Score: 15.78130132380848
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic recourse -- providing recommendations to those affected
negatively by the outcome of an algorithmic system on how they can take action
and change that outcome -- has gained attention as a means of giving persons
agency in their interactions with artificial intelligence (AI) systems. Recent
work has shown that even if an AI decision-making classifier is "fair"
(according to some reasonable criteria), recourse itself may be unfair due to
differences in the initial circumstances of individuals, compounding
disparities for marginalized populations and requiring them to exert more
effort than others. There is a need to define more methods and metrics for
evaluating fairness in recourse that span a range of normative views of the
world, and specifically those that take into account time. Time is a critical
element in recourse because the longer it takes an individual to act, the more
the setting may change due to model or data drift.
This paper seeks to close this research gap by proposing two notions of
fairness in recourse that are in normative alignment with substantive equality
of opportunity, and that consider time. The first considers the (often
repeated) effort individuals exert per successful recourse event, and the
second considers time per successful recourse event. Building upon an
agent-based framework for simulating recourse, this paper demonstrates how much
effort is needed to overcome disparities in initial circumstances. We then
propose an intervention to improve the fairness of recourse by rewarding
effort, and compare it to existing strategies.
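The two time-aware fairness notions above (effort per successful recourse event, and time per successful recourse event) can be sketched as simple metrics over a simulation log. The record layout and numbers below are illustrative assumptions, not the paper's actual agent-based framework:

```python
# Hypothetical simulation log: one record per recourse attempt.
# Each record: (group, effort_spent, time_steps, succeeded)
log = [
    ("A", 2.0, 3, True),
    ("A", 1.5, 2, True),
    ("B", 4.0, 6, True),
    ("B", 3.5, 5, False),
    ("B", 5.0, 7, True),
]

def per_success_metrics(log, group):
    """Total effort and time spent by a group, divided by the
    number of its successful recourse events."""
    records = [r for r in log if r[0] == group]
    successes = sum(1 for r in records if r[3])
    if successes == 0:
        return float("inf"), float("inf")  # recourse never achieved
    effort = sum(r[1] for r in records) / successes
    time = sum(r[2] for r in records) / successes
    return effort, time

for g in ("A", "B"):
    e, t = per_success_metrics(log, g)
    print(f"group {g}: effort/success = {e:.2f}, time/success = {t:.2f}")
    # → group A: effort/success = 1.75, time/success = 2.50
    # → group B: effort/success = 6.25, time/success = 9.00
```

Failed attempts still count toward a group's total effort and time, so a group whose initial circumstances force repeated attempts shows a strictly worse per-success ratio, which is the disparity these metrics are meant to surface.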
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Setting the Right Expectations: Algorithmic Recourse Over Time [16.930905275894183]
We propose an agent-based simulation framework for studying the effects of a continuously changing environment on algorithmic recourse.
Our findings highlight that only a small set of specific parameterizations result in algorithmic recourse that is reliable for agents over time.
arXiv Detail & Related papers (2023-09-13T14:04:15Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
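Such a linear program can be sketched on a toy instance; the marketplace below, with three candidate allocations, two individuals, and hand-picked utilities, match indicators, and fairness floors, is invented for illustration and is not the paper's formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy marketplace: 3 possible allocations, 2 individuals.
utility = np.array([1.0, 0.8, 0.5])  # marketplace utility of each allocation
# receive[i][a] = 1 if individual i is matched under allocation a
receive = np.array([
    [1, 0, 1],   # individual 0
    [0, 1, 1],   # individual 1
])
# Fairness floor: each individual must be matched with at least
# this probability under the chosen distribution over allocations.
floor = np.array([0.3, 0.4])

# Variables: a probability distribution p over allocations.
# Maximize utility @ p  <=>  minimize -utility @ p.
res = linprog(
    c=-utility,
    A_ub=-receive, b_ub=-floor,      # receive @ p >= floor
    A_eq=np.ones((1, 3)), b_eq=[1],  # p sums to 1
    bounds=[(0, 1)] * 3,
)
p = res.x  # fair utility-maximizing distribution over allocations
```

On this instance the LP puts probability 0.6 on the highest-utility allocation and 0.4 on the second, which is the cheapest way to satisfy individual 1's matching floor while keeping expected utility maximal.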
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Algorithmic Fairness in Business Analytics: Directions for Research and Practice [24.309795052068388]
This paper offers a forward-looking, BA-focused review of algorithmic fairness.
We first review the state-of-the-art research on sources and measures of bias, as well as bias mitigation algorithms.
We then provide a detailed discussion of the utility-fairness relationship, emphasizing that the frequent assumption of a trade-off between these two constructs is often mistaken or short-sighted.
arXiv Detail & Related papers (2022-07-22T10:21:38Z)
- Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents [37.31138342300617]
We show that strategic agents may possess both the ability and the incentive to manipulate an observed feature vector in order to attain a more favorable outcome.
We further demonstrate that both the increased selectiveness of the fair classifier, and consequently the loss of fairness, arises when performing fair learning on domains in which the advantaged group is overrepresented.
arXiv Detail & Related papers (2021-12-06T02:42:43Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Case Study: Predictive Fairness to Reduce Misdemeanor Recidivism Through Social Service Interventions [4.651149317838983]
The Los Angeles City Attorney's Office created a new Recidivism Reduction and Drug Diversion unit (R2D2)
We describe a collaboration with this new unit as a case study for the incorporation of predictive equity into machine learning based decision making.
arXiv Detail & Related papers (2020-01-24T23:52:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.