A survey of bias in Machine Learning through the prism of Statistical
Parity for the Adult Data Set
- URL: http://arxiv.org/abs/2003.14263v2
- Date: Mon, 6 Apr 2020 11:16:10 GMT
- Title: A survey of bias in Machine Learning through the prism of Statistical
Parity for the Adult Data Set
- Authors: Philippe Besse, Eustasio del Barrio, Paula Gordaliza, Jean-Michel
Loubes and Laurent Risser
- Abstract summary: We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
- Score: 5.277804553312449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications based on Machine Learning models have now become an
indispensable part of everyday life and the professional world. A critical
question has recently arisen among the population: do algorithmic decisions
convey any type of discrimination against specific groups of the population or
minorities? In this paper, we show the importance of understanding how a bias
can be introduced into automatic decisions. We first present a mathematical
framework for the fair learning problem, specifically in the binary
classification setting. We then propose to quantify the presence of bias by
using the standard Disparate Impact index on the real and well-known Adult
income data set. Finally, we check the performance of different approaches
aiming to reduce the bias in binary classification outcomes. Importantly, we
show that some intuitive methods are ineffective. This sheds light on the fact
that building fair machine learning models may be a particularly challenging
task, especially when the training observations themselves contain a bias.
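The Disparate Impact (DI) index used in the paper compares the rate of favourable decisions received by a protected group with the rate received by the rest of the population; a value below the commonly cited "80% rule" threshold of 0.8 is usually read as a sign of bias. The sketch below shows how such an estimate could be computed on the Adult income data. It is only an illustration: the file name, column names ("sex", "income") and label encoding (">50K") are assumptions about a local copy of the data set, not the authors' code.

```python
# Minimal sketch (assumed column names and labels, not the authors' code):
# estimate the Disparate Impact index on the Adult income data set.
import pandas as pd

def disparate_impact(favourable: pd.Series, protected: pd.Series,
                     protected_value: str = "Female") -> float:
    """DI = P(favourable | protected group) / P(favourable | rest)."""
    p_protected = favourable[protected == protected_value].mean()
    p_rest = favourable[protected != protected_value].mean()
    return p_protected / p_rest

if __name__ == "__main__":
    adult = pd.read_csv("adult.csv")                        # hypothetical local copy
    favourable = (adult["income"] == ">50K").astype(float)  # favourable outcome
    di = disparate_impact(favourable, adult["sex"])
    print(f"Disparate Impact (female vs. rest): {di:.3f}")
    # The "80% rule" flags DI < 0.8 as a potential disparate impact.
```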
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Are fairness metric scores enough to assess discrimination biases in
machine learning? [4.073786857780967]
We focus on the Bios dataset, and our learning task is to predict the occupation of individuals, based on their biography.
We address an important limitation of theoretical discussions dealing with group-wise fairness metrics: they focus on large datasets.
We then question how reliable different popular measures of bias are when the size of the training set is simply sufficient to learn reasonably accurate predictions.
arXiv Detail & Related papers (2023-06-08T15:56:57Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Understanding Unfairness in Fraud Detection through Model and Data Bias
Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - On the Basis of Sex: A Review of Gender Bias in Machine Learning
Applications [0.0]
We first introduce several examples of machine learning gender bias in practice.
We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer.
arXiv Detail & Related papers (2021-04-06T14:11:16Z) - Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce
Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z) - Fairness-Aware Online Personalization [16.320648868892526]
We present a study of fairness in online personalization settings involving the ranking of individuals.
We first demonstrate that online personalization can cause the model to learn to act in an unfair manner if the user is biased in his/her responses.
We then formulate the problem of learning personalized models under fairness constraints and present a regularization-based approach for mitigating biases in machine learning (a generic sketch of such a regularizer appears after this list).
arXiv Detail & Related papers (2020-07-30T07:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.