Fairness in Algorithmic Profiling: A German Case Study
- URL: http://arxiv.org/abs/2108.04134v1
- Date: Wed, 4 Aug 2021 13:43:42 GMT
- Title: Fairness in Algorithmic Profiling: A German Case Study
- Authors: Christoph Kern, Ruben L. Bach, Hannah Mautner and Frauke Kreuter
- Abstract summary: We compare and evaluate statistical models for predicting job seekers' risk of becoming long-term unemployed.
We show that these models can be used to predict long-term unemployment with competitive levels of accuracy.
We highlight that different classification policies have very different fairness implications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic profiling is increasingly used in the public sector as a means to
allocate limited public resources effectively and objectively. One example is
the prediction-based statistical profiling of job seekers to guide the
allocation of support measures by public employment services. However,
empirical evaluations of potential side-effects such as unintended
discrimination and fairness concerns are rare. In this study, we compare and
evaluate statistical models for predicting job seekers' risk of becoming
long-term unemployed with respect to prediction performance, fairness metrics,
and vulnerabilities to data analysis decisions. Focusing on Germany as a use
case, we evaluate profiling models under realistic conditions by utilizing
administrative data on job seekers' employment histories that are routinely
collected by German public employment services. Besides showing that these data
can be used to predict long-term unemployment with competitive levels of
accuracy, we highlight that different classification policies have very
different fairness implications. We therefore call for rigorous auditing
processes before such models are put into practice.
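The kind of audit the abstract calls for can be illustrated with a short sketch. The snippet below is a hypothetical, minimal example and not the authors' actual pipeline: it trains a logistic-regression risk model on synthetic stand-in data for job seekers, then compares two classification policies (a fixed risk threshold versus flagging the top quartile by predicted risk) on accuracy and on two common group fairness metrics. All feature definitions, thresholds, and the protected attribute are illustrative assumptions.

```python
# Minimal, hypothetical audit sketch (synthetic data, not the paper's pipeline):
# train a risk model, then compare two classification policies by accuracy,
# demographic parity gap, and true-positive-rate (equal opportunity) gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                        # protected attribute (e.g., gender), assumed binary
X = rng.normal(size=(n, 5)) + 0.3 * group[:, None]   # synthetic stand-in for employment-history features
y = (X.sum(axis=1) + rng.normal(size=n) > 1.5).astype(int)  # 1 = long-term unemployed (synthetic label)

X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]               # predicted risk of long-term unemployment

def audit(policy_name, flagged):
    """Report accuracy and simple group fairness metrics for one policy."""
    acc = (flagged == y_te).mean()
    # Demographic parity gap: difference in flag rates between groups.
    dp_gap = abs(flagged[g_te == 0].mean() - flagged[g_te == 1].mean())
    # Equal opportunity gap: difference in true positive rates between groups.
    tpr = [flagged[(g_te == g) & (y_te == 1)].mean() for g in (0, 1)]
    print(f"{policy_name}: accuracy={acc:.3f}  dp_gap={dp_gap:.3f}  tpr_gap={abs(tpr[0] - tpr[1]):.3f}")

# Policy A: flag everyone above a fixed risk threshold.
audit("threshold-0.5", (risk >= 0.5).astype(int))
# Policy B: flag the 25% of job seekers with the highest predicted risk.
cutoff = np.quantile(risk, 0.75)
audit("top-25-percent", (risk >= cutoff).astype(int))
```

Even on this toy data, the two policies typically produce different group-level flag rates and error rates despite sharing one underlying model, which is the sense in which classification policy, not just model choice, carries fairness implications.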
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z) - Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment [3.683202928838613]
We argue that addressing this problem requires a notion of prospective fairness that anticipates the change in the distribution of social goods after deployment.
We are guided by an application from public administration: the use of algorithms to predict who among the recently unemployed will remain unemployed in the long term.
We simulate how such algorithmically informed policies would affect gender inequalities in long-term unemployment.
arXiv Detail & Related papers (2024-01-25T14:17:11Z) - The Impact of Differential Feature Under-reporting on Algorithmic Fairness [86.275300739926]
We present an analytically tractable model of differential feature under-reporting.
We then use it to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real-world data settings, under-reporting typically leads to increased disparities.
arXiv Detail & Related papers (2024-01-16T19:16:22Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model underlying the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Consistent Range Approximation for Fair Predictive Modeling [10.613912061919775]
The framework builds predictive models that are certifiably fair on the target population, regardless of the availability of external data during training.
The framework's efficacy is demonstrated through evaluations on real data, showing substantial improvement over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-12-21T08:27:49Z) - Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z) - Long-term dynamics of fairness: understanding the impact of data-driven
targeted help on job seekers [1.357291726431012]
We use an approach that combines statistics and machine learning to assess long-term fairness effects of labor market interventions.
We develop and use a model to investigate the impact of decisions made by a public employment authority that selectively supports job seekers.
We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real-world, careful modeling of the surrounding labor market is indispensable.
arXiv Detail & Related papers (2022-08-17T12:03:23Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)