From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment
- URL: http://arxiv.org/abs/2401.14438v2
- Date: Mon, 17 Jun 2024 09:10:30 GMT
- Title: From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment
- Authors: Sebastian Zezulka, Konstantin Genin
- Abstract summary: We argue that addressing this problem requires a notion of prospective fairness that anticipates the change in the distribution of social goods after deployment.
We are guided by an application from public administration: the use of algorithms to predict who among the recently unemployed will remain unemployed in the long term.
We simulate how such algorithmically informed policies would affect gender inequalities in long-term unemployment.
- Score: 3.683202928838613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deploying an algorithmically informed policy is a significant intervention in society. Prominent methods for algorithmic fairness focus on the distribution of predictions at the time of training, rather than the distribution of social goods that arises after deploying the algorithm in a specific social context. However, requiring a "fair" distribution of predictions may undermine efforts at establishing a fair distribution of social goods. First, we argue that addressing this problem requires a notion of prospective fairness that anticipates the change in the distribution of social goods after deployment. Second, we provide formal conditions under which this change is identified from pre-deployment data. That requires accounting for different kinds of performative effects. Here, we focus on the way predictions change policy decisions and, consequently, the causally downstream distribution of social goods. Throughout, we are guided by an application from public administration: the use of algorithms to predict who among the recently unemployed will remain unemployed in the long term and to target them with labor market programs. Third, using administrative data from the Swiss public employment service, we simulate how such algorithmically informed policies would affect gender inequalities in long-term unemployment. When risk predictions are required to be "fair" according to statistical parity and equality of opportunity, targeting decisions are less effective, undermining efforts to both lower overall levels of long-term unemployment and to close the gender gap in long-term unemployment.
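The abstract refers to risk predictions constrained by statistical parity and equality of opportunity. As a minimal sketch of what those group-fairness criteria measure, the hypothetical helpers below compute the gap in positive-prediction rates (statistical parity) and in true-positive rates (equality of opportunity) between two groups; the function names and the binary encoding of predictions, outcomes, and group membership are illustrative assumptions, not the paper's implementation.

```python
def statistical_parity_diff(y_pred, group):
    """P(pred=1 | group=1) - P(pred=1 | group=0).

    y_pred and group are equal-length sequences of 0/1 values.
    A value of 0 means both groups are flagged at the same rate.
    """
    rate = lambda xs: sum(xs) / len(xs)
    flagged_g1 = [p for p, g in zip(y_pred, group) if g]
    flagged_g0 = [p for p, g in zip(y_pred, group) if not g]
    return rate(flagged_g1) - rate(flagged_g0)


def equal_opportunity_diff(y_true, y_pred, group):
    """TPR(group=1) - TPR(group=0).

    Restricts attention to individuals with y_true=1 (e.g. those who
    actually become long-term unemployed) and compares how often each
    group is correctly flagged as high risk.
    """
    rate = lambda xs: sum(xs) / len(xs)
    tp_g1 = [p for t, p, g in zip(y_true, y_pred, group) if t and g]
    tp_g0 = [p for t, p, g in zip(y_true, y_pred, group) if t and not g]
    return rate(tp_g1) - rate(tp_g0)
```

For example, with predictions `[1, 0, 1, 0]` and group labels `[1, 1, 0, 0]`, both groups are flagged at rate 0.5, so the statistical parity difference is 0. The paper's point is that forcing such gaps to zero at training time can make targeting less effective once the policy is deployed.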
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - The Impact of Differential Feature Under-reporting on Algorithmic Fairness [86.275300739926]
We present an analytically tractable model of differential feature under-reporting.
We then use this model to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real world data settings, under-reporting typically leads to increasing disparities.
arXiv Detail & Related papers (2024-01-16T19:16:22Z) - Performativity and Prospective Fairness [4.3512163406552]
We focus on the algorithmic effect on the causally downstream outcome variable.
We show how to predict whether such policies will exacerbate gender inequalities in the labor market.
arXiv Detail & Related papers (2023-10-12T14:18:13Z) - Fairness Transferability Subject to Bounded Distribution Shift [5.62716254065607]
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
arXiv Detail & Related papers (2022-05-31T22:16:44Z) - Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z) - Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings [0.0]
In performative prediction settings, predictors are precisely intended to induce distribution shift.
In criminal justice, healthcare, and consumer finance, the purpose of building a predictor is to reduce the rate of adverse outcomes.
We show how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
arXiv Detail & Related papers (2022-02-10T14:09:02Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Fairness in Algorithmic Profiling: A German Case Study [0.0]
We compare and evaluate statistical models for predicting job seekers' risk of becoming long-term unemployed.
We show that these models can be used to predict long-term unemployment with competitive levels of accuracy.
We highlight that different classification policies have very different fairness implications.
arXiv Detail & Related papers (2021-08-04T13:43:42Z) - Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? [0.0]
We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
arXiv Detail & Related papers (2021-05-04T12:09:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.