Fairness in Algorithmic Profiling: A German Case Study
- URL: http://arxiv.org/abs/2108.04134v1
- Date: Wed, 4 Aug 2021 13:43:42 GMT
- Title: Fairness in Algorithmic Profiling: A German Case Study
- Authors: Christoph Kern, Ruben L. Bach, Hannah Mautner and Frauke Kreuter
- Abstract summary: We compare and evaluate statistical models for predicting job seekers' risk of becoming long-term unemployed.
We show that these models can be used to predict long-term unemployment with competitive levels of accuracy.
We highlight that different classification policies have very different fairness implications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic profiling is increasingly used in the public sector as a means to
allocate limited public resources effectively and objectively. One example is
the prediction-based statistical profiling of job seekers to guide the
allocation of support measures by public employment services. However,
empirical evaluations of potential side-effects such as unintended
discrimination and fairness concerns are rare. In this study, we compare and
evaluate statistical models for predicting job seekers' risk of becoming
long-term unemployed with respect to prediction performance, fairness metrics,
and vulnerabilities to data analysis decisions. Focusing on Germany as a use
case, we evaluate profiling models under realistic conditions by utilizing
administrative data on job seekers' employment histories that are routinely
collected by German public employment services. Besides showing that these data
can be used to predict long-term unemployment with competitive levels of
accuracy, we highlight that different classification policies have very
different fairness implications. We therefore call for rigorous auditing
processes before such models are put into practice.
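As a concrete illustration of how classification policies shape fairness outcomes, the sketch below fits a risk model to synthetic data and compares two hypothetical policies, flagging the top 25% by predicted risk versus flagging everyone above a fixed cutoff, on demographic-parity and equal-opportunity gaps. The data, the group attribute, and both policies are illustrative assumptions, not the study's actual setup.

```python
# A minimal sketch (synthetic data, hypothetical policies): one risk model,
# two classification policies, two fairness metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                        # hypothetical protected attribute
x = rng.normal(size=(n, 3)) + 0.4 * group[:, None]   # features correlated with group
true_logit = x @ np.array([1.0, 0.8, -0.5]) - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))   # 1 = long-term unemployed

scores = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]

def gaps(flagged):
    """Demographic-parity and equal-opportunity gaps between the two groups."""
    sel = [flagged[group == g].mean() for g in (0, 1)]               # selection rates
    tpr = [flagged[(group == g) & (y == 1)].mean() for g in (0, 1)]  # true positive rates
    return abs(sel[0] - sel[1]), abs(tpr[0] - tpr[1])

# Policy A: flag the 25% of job seekers with the highest predicted risk.
# Policy B: flag everyone whose predicted risk exceeds a fixed cutoff.
policies = {"top-25%": scores >= np.quantile(scores, 0.75),
            "cutoff 0.5": scores >= 0.5}
for name, flagged in policies.items():
    dp, eo = gaps(flagged)
    print(f"{name:10s} demographic-parity gap={dp:.3f}  equal-opportunity gap={eo:.3f}")
```

Quantile policies fix the overall share of flagged job seekers, while cutoff policies let it float with the score distribution, which is one reason the two can produce different group-level selection rates, and hence different fairness gaps, from the same model.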
Related papers
- Targeted Learning for Data Fairness [52.59573714151884]
We expand fairness inference by evaluating fairness in the data generating process itself.
We derive estimators for demographic parity, equal opportunity, and conditional mutual information; naive plug-in versions of these quantities are sketched after this list.
To validate our approach, we perform several simulations and apply our estimators to real data.
arXiv Detail & Related papers (2025-02-06T18:51:28Z)
- The Value of Prediction in Identifying the Worst-Off [3.468330970960535]
Machine learning is increasingly used in government programs to identify and support the most vulnerable individuals.
This paper examines the welfare impacts of prediction in equity-driven contexts, and how they compare to other policy levers.
arXiv Detail & Related papers (2025-01-31T17:34:53Z)
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies and shows a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z)
- From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment [3.683202928838613]
We argue that addressing this problem requires a notion of prospective fairness that anticipates the change in the distribution of social goods after deployment.
We are guided by an application from public administration: the use of algorithms to predict who among the recently unemployed will remain unemployed in the long term.
We simulate how such algorithmically informed policies would affect gender inequalities in long-term unemployment.
arXiv Detail & Related papers (2024-01-25T14:17:11Z)
- The Impact of Differential Feature Under-reporting on Algorithmic Fairness [86.275300739926]
We present an analytically tractable model of differential feature under-reporting.
We then use this model to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real-world data settings, under-reporting typically leads to increased disparities.
arXiv Detail & Related papers (2024-01-16T19:16:22Z)
- Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z)
- Long-term dynamics of fairness: understanding the impact of data-driven targeted help on job seekers [1.357291726431012]
We use an approach that combines statistics and machine learning to assess long-term fairness effects of labor market interventions.
We develop and use a model to investigate the impact of decisions made by a public employment authority that selectively supports job seekers.
We conclude that, in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real world, careful modeling of the surrounding labor market is indispensable.
arXiv Detail & Related papers (2022-08-17T12:03:23Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
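As noted in the "Targeted Learning for Data Fairness" entry above, here are naive plug-in versions of the three quantities it names, for binary predictions, attributes, and outcomes. These generic empirical estimators are for orientation only; the paper itself derives targeted (debiased) estimators, which are not reproduced here.

```python
# Naive plug-in versions of the quantities named in "Targeted Learning for
# Data Fairness". Generic illustrations only; the paper's targeted (debiased)
# estimators differ from these.
import numpy as np

def demographic_parity_gap(pred, a):
    """|P(pred=1 | A=0) - P(pred=1 | A=1)| from empirical frequencies."""
    return abs(pred[a == 0].mean() - pred[a == 1].mean())

def equal_opportunity_gap(pred, a, y):
    """|P(pred=1 | A=0, Y=1) - P(pred=1 | A=1, Y=1)|."""
    return abs(pred[(a == 0) & (y == 1)].mean()
               - pred[(a == 1) & (y == 1)].mean())

def conditional_mutual_information(pred, a, y):
    """Plug-in estimate of I(pred; A | Y) in nats, for binary arrays."""
    cmi = 0.0
    for yv in (0, 1):
        sub_pred, sub_a = pred[y == yv], a[y == yv]
        p_y = (y == yv).mean()
        for pv in (0, 1):
            for av in (0, 1):
                p_joint = ((sub_pred == pv) & (sub_a == av)).mean()
                if p_joint > 0:  # skip empty cells to keep the log finite
                    cmi += p_y * p_joint * np.log(
                        p_joint / ((sub_pred == pv).mean() * (sub_a == av).mean()))
    return cmi
```

With the arrays from the sketch under the abstract, demographic_parity_gap(policies["top-25%"], group) reproduces the printed demographic-parity gap for that policy.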