Long-term dynamics of fairness: understanding the impact of data-driven
targeted help on job seekers
- URL: http://arxiv.org/abs/2208.08881v1
- Date: Wed, 17 Aug 2022 12:03:23 GMT
- Title: Long-term dynamics of fairness: understanding the impact of data-driven
targeted help on job seekers
- Authors: Sebastian Scher, Simone Kopeinik, Andreas Trügler, Dominik Kowald
- Abstract summary: We use an approach that combines statistics and machine learning to assess long-term fairness effects of labor market interventions.
We develop and use a model to investigate the impact of decisions made by a public employment authority that selectively supports job-seekers.
We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real world, careful modeling of the surrounding labor market is indispensable.
- Score: 1.357291726431012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of data-driven decision support by public agencies is becoming more
widespread and already influences the allocation of public resources. This
raises ethical concerns, as it has adversely affected minorities and
historically discriminated groups. In this paper, we use an approach that
combines statistics and machine learning with dynamical modeling to assess
long-term fairness effects of labor market interventions. Specifically, we
develop and use a model to investigate the impact of decisions made by a
public employment authority that selectively supports job-seekers through
targeted help. The selection of who receives what help is based on a
data-driven intervention model that estimates an individual's chances of
finding a job in a timely manner, trained on data describing a
population in which skills relevant to the labor market are unevenly
distributed between two groups (e.g., males and females). The intervention
model has incomplete access to the individual's actual skills and can augment
this with knowledge of the individual's group affiliation, thus using a
protected attribute to increase predictive accuracy. We assess this
intervention model's dynamics -- especially fairness-related issues and
trade-offs between different fairness goals -- over time and compare it to an
intervention model that does not use group affiliation as a predictive feature.
We conclude that in order to quantify the trade-off correctly and to assess the
long-term fairness effects of such a system in the real world, careful modeling
of the surrounding labor market is indispensable.
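The dynamics described in the abstract can be illustrated with a minimal simulation sketch. All specifics below (population size, skill distributions, logistic job-finding model, budget and boost parameters, coefficient values) are illustrative assumptions, not the paper's actual model: skills are unevenly distributed between two groups, an intervention model predicts each individual's job chances from a noisy skill proxy (optionally also using the protected attribute), and targeted help raises the skills of those predicted least likely to find a job.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: skills unevenly distributed between two groups.
N = 10_000
group = rng.integers(0, 2, N)                       # protected attribute (0 or 1)
skill = rng.normal(np.where(group == 0, 0.3, -0.3), 1.0)

def p_job(s):
    """True chance of finding a job in time (logistic in skill)."""
    return 1 / (1 + np.exp(-s))

def predict(proxy, group, use_group):
    """Intervention model: sees only a noisy skill proxy; may add the
    group attribute as a feature (illustrative coefficients)."""
    score = 0.5 * proxy
    if use_group:
        score += 0.3 * np.where(group == 0, 1, -1)
    return 1 / (1 + np.exp(-score))

def simulate(use_group, steps=20, budget=0.2, boost=0.15):
    """Iterate: predict, target help at the bottom `budget` fraction,
    let help raise skills; return the long-run group gap in job chances."""
    s = skill.copy()
    for _ in range(steps):
        pred = predict(s + rng.normal(0, 1.0, N), group, use_group)
        helped = pred <= np.quantile(pred, budget)
        s = s + boost * helped
    return p_job(s[group == 0]).mean() - p_job(s[group == 1]).mean()

gap_with = simulate(use_group=True)
gap_without = simulate(use_group=False)
print(f"long-run job-chance gap (with group feature):    {gap_with:.3f}")
print(f"long-run job-chance gap (without group feature): {gap_without:.3f}")
```

Comparing the two runs mirrors the paper's comparison between an intervention model that uses group affiliation and one that does not; in this toy setting the gap depends on how help propagates through the simulated labor market over time.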
Related papers
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Ground(less) Truth: A Causal Framework for Proxy Labels in
Human-Algorithm Decision-Making [29.071173441651734]
We identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
We develop a causal framework to disentangle the relationships among these biases.
We conclude by discussing opportunities to better address target variable bias in future research.
arXiv Detail & Related papers (2023-02-13T16:29:11Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive data while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Fairness in Algorithmic Profiling: A German Case Study [0.0]
We compare and evaluate statistical models for predicting job seekers' risk of becoming long-term unemployed.
We show that these models can be used to predict long-term unemployment with competitive levels of accuracy.
We highlight that different classification policies have very different fairness implications.
arXiv Detail & Related papers (2021-08-04T13:43:42Z) - Fairness-aware Summarization for Justified Decision-Making [16.47665757950391]
We focus on the problem of (un)fairness in the justification of the text-based neural models.
We propose a fairness-aware summarization mechanism to detect and counteract the bias in such models.
arXiv Detail & Related papers (2021-07-13T17:04:10Z) - Information Theoretic Measures for Fairness-aware Feature Selection [27.06618125828978]
We develop a framework for fairness-aware feature selection, based on information theoretic measures for the accuracy and discriminatory impacts of features.
Specifically, our goal is to design a fairness utility score for each feature which quantifies how this feature influences accurate as well as nondiscriminatory decisions.
arXiv Detail & Related papers (2021-06-01T20:11:54Z)
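The per-feature fairness utility score described in the last entry can be sketched with mutual information. The specific form below (relevance to the label minus a penalty for leakage of the protected attribute, weighted by a hypothetical trade-off parameter `lam`) is one plausible instantiation under assumption, not necessarily the measure the cited paper defines.

```python
import numpy as np

def discrete_mi(x, y):
    """Mutual information I(X;Y) in nats for two discrete arrays."""
    xv, xi = np.unique(x, return_inverse=True)
    yv, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xv), len(yv)))
    np.add.at(joint, (xi, yi), 1)                   # joint counts
    joint /= joint.sum()                            # joint distribution
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

def fairness_utility(feature, label, attribute, lam=1.0):
    """Reward relevance to the decision label, penalize information
    the feature carries about the protected attribute."""
    return discrete_mi(feature, label) - lam * discrete_mi(feature, attribute)

# Toy check: a feature that encodes the label should score higher than
# a feature that merely encodes the protected attribute.
rng = np.random.default_rng(1)
label = rng.integers(0, 2, 1000)
attr = rng.integers(0, 2, 1000)
score_relevant = fairness_utility(label, label, attr)   # predictive, unbiased
score_biased = fairness_utility(attr, label, attr)      # only leaks the attribute
print(score_relevant > score_biased)
```

Features could then be ranked by this score and the top-k retained, trading predictive value against discriminatory impact via `lam`.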
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.