Mitigating Algorithmic Bias on Facial Expression Recognition
- URL: http://arxiv.org/abs/2312.15307v1
- Date: Sat, 23 Dec 2023 17:41:30 GMT
- Title: Mitigating Algorithmic Bias on Facial Expression Recognition
- Authors: Glauco Amigo, Pablo Rivas Perea, Robert J. Marks
- Abstract summary: Biased datasets are ubiquitous and present a challenge for machine learning.
The problem of biased datasets is especially sensitive when dealing with minority people groups.
This work explores one way to mitigate bias using a debiasing variational autoencoder with experiments on facial expression recognition.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biased datasets are ubiquitous and present a challenge for machine learning.
When the categories in a dataset are equally important but some are sparse and
others are common, learning algorithms will favor the ones with more
presence. The problem of biased datasets is especially sensitive when
dealing with minority people groups. How can we, from biased data, generate
algorithms that treat every person equally? This work explores one way to
mitigate bias using a debiasing variational autoencoder with experiments on
facial expression recognition.
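The abstract does not spell out the debiasing mechanism, so the following is a minimal sketch of the reweighting scheme commonly used with debiasing variational autoencoders (in the style of Amini et al.'s DB-VAE), not the authors' exact code: latent codes from a VAE trained on the face images are used to estimate how common each sample is, and rare samples are drawn more often during classifier training. All names are illustrative.

```python
import numpy as np

def debiasing_weights(latent_means, bins=10, alpha=0.01):
    """DB-VAE-style reweighting: estimate the density of each sample's
    VAE latent code with per-dimension histograms, then weight each
    sample inversely to that density so rare faces are seen more often.

    latent_means: (N, D) array of latent means from a trained VAE.
    Returns normalized sampling probabilities of shape (N,).
    """
    n, d = latent_means.shape
    density = np.ones(n)
    for j in range(d):
        hist, edges = np.histogram(latent_means[:, j], bins=bins, density=True)
        # Map each sample to its histogram bin (interior edges only).
        idx = np.digitize(latent_means[:, j], edges[1:-1])
        density *= hist[idx] + alpha  # smoothing avoids zero density
    weights = 1.0 / density
    return weights / weights.sum()

# Each training step, draw the minibatch with these probabilities, e.g.:
# batch_idx = np.random.choice(len(weights), size=64, p=weights)
```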
Related papers
- Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks
Deep learning models perform poorly when applied to videos with rare scenes or objects.
We tackle this problem from two different angles: algorithm and dataset.
We show that the debiased representation can generalize better when transferred to other datasets and tasks.
arXiv Detail & Related papers (2022-09-20T00:30:35Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Assessing Gender Bias in Predictive Algorithms using eXplainable AI
Predictive algorithms have powerful potential to offer benefits in areas as varied as medicine and education.
However, they can inherit the biases and prejudices present in humans.
Their outcomes can then systematically repeat errors that create unfair results.
arXiv Detail & Related papers (2022-03-19T07:47:45Z)
- Fair Group-Shared Representations with Normalizing Flows
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Algorithms are not neutral: Bias in collaborative filtering
Discussions of algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased.
This is illustrated with the example of collaborative filtering, which is known to suffer from popularity and homogenizing biases.
Popularity and homogenizing biases have the effect of further marginalizing the already marginal.
arXiv Detail & Related papers (2021-05-03T17:28:43Z)
- Can Active Learning Preemptively Mitigate Fairness Issues?
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD; a minimal gradient-reversal sketch appears after this list.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Towards causal benchmarking of bias in face analysis algorithms
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Underestimation Bias and Underfitting in Machine Learning
What is termed algorithmic bias in machine learning is often due to historic bias in the training data.
Sometimes the bias may be introduced (or at least exacerbated) by the algorithm itself.
In this paper we report on initial research to understand the factors that contribute to bias in classification algorithms.
arXiv Detail & Related papers (2020-05-18T20:01:56Z)
- Contrastive Examples for Addressing the Tyranny of the Majority
We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships have been intervened on.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
arXiv Detail & Related papers (2020-04-14T14:06:44Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set (see the sketch after this list).
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
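As a concrete illustration of the Disparate Impact index used in the survey entry above, here is a minimal, self-contained computation; the arrays below are a toy stand-in, not the Adult data set. The index is the ratio of positive-outcome rates between the protected and reference groups, and the common four-fifths rule flags values below 0.8.

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Disparate Impact: ratio of positive-outcome rates between the
    protected group and the reference group. Values near 1 indicate
    parity; the four-fifths rule flags values below 0.8."""
    y_pred = np.asarray(y_pred, dtype=bool)
    protected = np.asarray(protected, dtype=bool)
    rate_protected = y_pred[protected].mean()
    rate_reference = y_pred[~protected].mean()
    return rate_protected / rate_reference

# Toy check: positive rate 0.2 for the protected group vs 0.8 otherwise,
# so DI = 0.2 / 0.8 = 0.25, well below the 0.8 threshold.
y = np.array([1, 0, 0, 1, 1, 1, 0, 1, 0, 0])
s = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1])
print(disparate_impact(y, s))  # 0.25
```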
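The gradient reversal (GRAD) method mentioned in the active learning entry above is not described in that abstract. The sketch below shows the standard gradient reversal layer from domain-adversarial training, which such methods typically place between a feature encoder and a head that predicts the protected attribute; this is an assumption about the general technique, not the authors' code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambd on the
    way back, so the encoder is trained to fool the adversary."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed; lambd receives no gradient.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: encoder features pass through grad_reverse before an
# adversarial head that predicts the protected attribute; minimizing
# the head's loss then pushes the encoder to discard that information.
```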