Performativity and Prospective Fairness
- URL: http://arxiv.org/abs/2310.08349v2
- Date: Wed, 13 Dec 2023 17:12:27 GMT
- Title: Performativity and Prospective Fairness
- Authors: Sebastian Zezulka and Konstantin Genin
- Abstract summary: We focus on the algorithmic effect on the causally downstream outcome variable.
We show how to predict whether such policies will exacerbate gender inequalities in the labor market.
- Score: 4.3512163406552
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deploying an algorithmically informed policy is a significant intervention in
the structure of society. As is increasingly acknowledged, predictive
algorithms have performative effects: using them can shift the distribution of
social outcomes away from the one on which the algorithms were trained.
Algorithmic fairness research is usually motivated by the worry that these
performative effects will exacerbate the structural inequalities that gave rise
to the training data. However, standard retrospective fairness methodologies
are ill-suited to predict these effects. They impose static fairness
constraints that hold after the predictive algorithm is trained, but before it
is deployed and, therefore, before performative effects have had a chance to
kick in. However, satisfying static fairness criteria after training is not
sufficient to avoid exacerbating inequality after deployment. Addressing the
fundamental worry that motivates algorithmic fairness requires explicitly
comparing the change in relevant structural inequalities before and after
deployment. We propose a prospective methodology for estimating this
post-deployment change from pre-deployment data and knowledge about the
algorithmic policy. That requires a strategy for distinguishing between, and
accounting for, different kinds of performative effects. In this paper, we
focus on the algorithmic effect on the causally downstream outcome variable.
Throughout, we are guided by an application from public administration: the use
of algorithms to (1) predict who among the recently unemployed will stay
unemployed for the long term and (2) target them with labor market programs.
We illustrate our proposal by showing how to predict whether such policies will
exacerbate gender inequalities in the labor market.
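The paper's core idea of comparing a structural inequality before and after deployment can be illustrated with a toy simulation. The sketch below is NOT the authors' method: the population, baseline risks, noise model, treatment-effect size, and top-quartile targeting rule are all illustrative assumptions. It merely shows the shape of a prospective evaluation: estimate the gender gap in long-term unemployment (LTU) pre-deployment, simulate the policy's performative effect on the downstream outcome, and re-measure the gap.

```python
import random

random.seed(0)

def simulate_ltu_gap(policy_effect=0.15, n=10_000):
    """Compare the gender gap in long-term-unemployment (LTU) rates
    before and after a simulated algorithmically informed policy.
    All numeric values are hypothetical assumptions for illustration."""
    population = []
    for _ in range(n):
        gender = random.choice(["f", "m"])
        # assumed structural inequality: women face a higher baseline LTU risk
        base_risk = 0.30 if gender == "f" else 0.20
        # imperfect prediction: noisy observation of the true risk
        predicted = min(1.0, max(0.0, base_risk + random.gauss(0, 0.05)))
        population.append((gender, base_risk, predicted))

    def gap(risks):
        # mean LTU risk of women minus mean LTU risk of men
        f = [r for g, r in risks if g == "f"]
        m = [r for g, r in risks if g == "m"]
        return sum(f) / len(f) - sum(m) / len(m)

    # pre-deployment inequality, measured on baseline risks
    pre = gap([(g, b) for g, b, _ in population])

    # post-deployment: the policy targets the top-risk quartile with a
    # program that lowers their LTU probability by `policy_effect` --
    # a performative effect on the causally downstream outcome variable
    threshold = sorted(p for _, _, p in population)[int(0.75 * n)]
    post_risks = [
        (g, max(0.0, b - policy_effect) if p >= threshold else b)
        for g, b, p in population
    ]
    return pre, gap(post_risks)
```

Under these assumptions the targeted group is mostly women (they dominate the high-risk quartile), so the simulated policy narrows the gap; reversing the assumptions (e.g., a prediction model that under-scores women's risk) would let the same comparison detect a policy that widens it, which is the kind of prospective check the paper argues for.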
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - Algorithmic Fairness in Performative Policy Learning: Escaping the Impossibility of Group Fairness [19.183108418687226]
We develop algorithmic fairness practices that leverage performativity to achieve stronger group fairness guarantees in social classification problems.
A crucial benefit of this approach is that it is possible to resolve the incompatibilities between conflicting group fairness definitions.
arXiv Detail & Related papers (2024-05-30T19:46:47Z) - From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment [3.683202928838613]
We argue that addressing this problem requires a notion of prospective fairness that anticipates the change in the distribution of social goods after deployment.
We are guided by an application from public administration: the use of algorithms to predict who among the recently unemployed will remain unemployed in the long term.
We simulate how such algorithmically informed policies would affect gender inequalities in long-term unemployment.
arXiv Detail & Related papers (2024-01-25T14:17:11Z) - The Impact of Differential Feature Under-reporting on Algorithmic Fairness [86.275300739926]
We present an analytically tractable model of differential feature under-reporting.
We then use it to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real world data settings, under-reporting typically leads to increasing disparities.
arXiv Detail & Related papers (2024-01-16T19:16:22Z) - The Relative Value of Prediction in Algorithmic Decision Making [0.0]
We ask: What is the relative value of prediction in algorithmic decision making?
We identify simple, sharp conditions determining the relative value of prediction vis-a-vis expanding access.
We illustrate how these theoretical insights may be used to guide the design of algorithmic decision making systems in practice.
arXiv Detail & Related papers (2023-12-13T20:52:45Z) - Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics [6.946103498518291]
We evaluate 8.2 million algorithmic predictions of math performance from approximately 400 AI engineers.
We find that biased predictions are mostly caused by biased training data.
One-third of the benefit of better training data comes through a novel economic mechanism.
arXiv Detail & Related papers (2020-12-04T04:12:33Z) - All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z) - Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes [112.38662246621969]
Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities.
We compute unbiased navigation gradients of the value function which we use as ascent directions to update the policy.
A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed.
arXiv Detail & Related papers (2020-10-16T15:15:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.