Exploring Biases and Prejudice of Facial Synthesis via Semantic Latent Space
- URL: http://arxiv.org/abs/2108.10265v1
- Date: Mon, 23 Aug 2021 16:09:18 GMT
- Title: Exploring Biases and Prejudice of Facial Synthesis via Semantic Latent Space
- Authors: Xuyang Shen, Jo Plested, Sabrina Caldwell, Tom Gedeon
- Abstract summary: This work targets the biased behavior of generative models, identifying the causes of the biases and eliminating them.
We can (as expected) conclude that biased data causes biased predictions of face frontalization models.
We found that the seemingly obvious choice of 50:50 proportions was not the best for this dataset to reduce biased behavior on female faces.
- Score: 1.858151490268935
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep learning (DL) models are widely used to make life more convenient and smarter. However, biased algorithms negatively affect the people they serve. For instance, groups targeted by biased algorithms may feel unfairly treated and even fear the negative consequences of these biases. This work targets the biased behavior of generative models, identifying the causes of the biases and eliminating them. We conclude (as expected) that biased data causes biased predictions in face frontalization models. Varying the proportions of male and female faces in the training data can have a substantial effect on behavior on the test data: we found that the seemingly obvious choice of a 50:50 proportion was not the best for this dataset at reducing biased behavior on female faces, yielding 71% unbiased outputs compared with our top unbiased rate of 84%. These models exhibit two biased behaviors: failing to generate a face at all and generating a face of the incorrect gender. In addition, only some layers in face frontalization models are vulnerable to biased datasets, and optimizing the skip-connections of the generator can make the models less biased. We conclude that it is likely impossible to eliminate all training bias without an unlimited-size dataset, but our experiments show that the bias can be reduced and quantified. We believe the next best thing to a perfectly unbiased predictor is one that minimizes the remaining known bias.
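The proportion sweep and the per-gender "unbiased rate" described in the abstract can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code: the sample layout (dicts with "gender" and "profile_image" keys) and the callables train_model, generation_ok, and apparent_gender are hypothetical stand-ins for the paper's actual dataset, frontalization model, and evaluation criteria.

```python
import random
from collections import defaultdict

def resample_by_gender(samples, male_fraction, total, seed=0):
    """Draw a training subset with the requested male:female proportion.

    Each sample is assumed to be a dict with a "gender" key ("male"/"female").
    """
    rng = random.Random(seed)
    males = [s for s in samples if s["gender"] == "male"]
    females = [s for s in samples if s["gender"] == "female"]
    n_male = int(round(total * male_fraction))
    subset = rng.sample(males, n_male) + rng.sample(females, total - n_male)
    rng.shuffle(subset)
    return subset

def unbiased_rate(frontalize, test_samples, generation_ok, apparent_gender):
    """Per-gender fraction of test faces that are frontalized successfully AND
    keep the correct apparent gender -- the two failure modes in the abstract."""
    good = defaultdict(int)
    seen = defaultdict(int)
    for s in test_samples:
        out = frontalize(s["profile_image"])
        ok = generation_ok(out) and apparent_gender(out) == s["gender"]
        good[s["gender"]] += int(ok)
        seen[s["gender"]] += 1
    return {g: good[g] / seen[g] for g in seen}

def sweep_proportions(train_samples, test_samples, train_model,
                      generation_ok, apparent_gender, total=10_000):
    """Train one model per male:female ratio and report per-gender unbiased rates."""
    results = {}
    for male_fraction in (0.3, 0.4, 0.5, 0.6, 0.7):
        subset = resample_by_gender(train_samples, male_fraction, total)
        # train_model is a hypothetical trainer returning a callable:
        # profile image -> frontalized image
        frontalize = train_model(subset)
        results[male_fraction] = unbiased_rate(
            frontalize, test_samples, generation_ok, apparent_gender)
    return results
```

Sweeping several ratios rather than fixing a 50:50 split mirrors the abstract's observation that the balanced split is not necessarily the least biased for a given dataset.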
Related papers
- Less can be more: representational vs. stereotypical gender bias in facial expression recognition [3.9698529891342207]
Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions.
This paper investigates the propagation of demographic biases from datasets into machine learning models.
We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical.
arXiv Detail & Related papers (2024-06-25T09:26:49Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - The Impact of Debiasing on the Performance of Language Models in
Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z) - Targeted Data Augmentation for bias mitigation [0.0]
We introduce a novel and efficient approach for addressing biases, called Targeted Data Augmentation (TDA).
Unlike the laborious task of removing biases, our method proposes to insert biases instead, resulting in improved performance.
To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces.
arXiv Detail & Related papers (2023-08-22T12:25:49Z) - Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face
Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z) - Gender Stereotyping Impact in Facial Expression Recognition [1.5340540198612824]
In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, apparent gender representation is usually mostly balanced, but representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy of up to 29% between genders in the recognition of certain emotions under the worst bias conditions.
arXiv Detail & Related papers (2022-10-11T10:52:23Z) - Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)