Gender Stereotyping Impact in Facial Expression Recognition
- URL: http://arxiv.org/abs/2210.05332v1
- Date: Tue, 11 Oct 2022 10:52:23 GMT
- Title: Gender Stereotyping Impact in Facial Expression Recognition
- Authors: Iris Dominguez-Catena, Daniel Paternain and Mikel Galar
- Abstract summary: In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, apparent gender representation is usually mostly balanced, but the gender representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy in the recognition of certain emotions between genders of up to $29\%$ under the worst bias conditions.
- Score: 1.5340540198612824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial Expression Recognition (FER) uses images of faces to identify the
emotional state of users, allowing for a closer interaction between humans and
autonomous systems. Unfortunately, as the images naturally integrate some
demographic information, such as apparent age, gender, and race of the subject,
these systems are prone to demographic bias issues. In recent years, machine
learning-based models have become the most popular approach to FER. These
models require training on large datasets of facial expression images, and
their generalization capabilities are strongly related to the characteristics
of the dataset. In publicly available FER datasets, the apparent gender
representation is usually mostly balanced overall, but the gender representation
within individual labels is not, embedding social stereotypes into the datasets and
generating a potential for harm. Although this type of bias has been overlooked
so far, it is important to understand the impact it may have in the context of
FER. To do so, we use a popular FER dataset, FER+, to generate derivative
datasets with different amounts of stereotypical bias by altering the gender
proportions of certain labels. We then proceed to measure the discrepancy
between the performance of the models trained on these datasets for the
apparent gender groups. We observe a discrepancy in the recognition of certain
emotions between genders of up to $29\%$ under the worst bias conditions. Our
results also suggest a safety range of stereotypical bias in a dataset, within
which the resulting model does not appear to exhibit stereotypical bias. Our
findings support the need for a thorough bias analysis of public datasets in
problems like FER, where a global balance of demographic representation can
still hide other types of bias that harm certain demographic groups.
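The experimental setup described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical Python sketch rather than the authors' code: it assumes a pandas DataFrame with illustrative `label` (emotion) and `gender` (apparent gender) columns, and a prediction array aligned with the DataFrame rows. It shows how a stereotypically biased variant of a training set could be derived by subsampling one gender within a single emotion label, and how the per-emotion recall discrepancy between apparent gender groups could then be measured.

```python
# Minimal sketch of the experiment described above; not the authors' pipeline.
# Assumptions: `df` is a pandas DataFrame with hypothetical columns
# "label" (emotion) and "gender" (apparent gender), and `y_pred` is an
# array of predicted labels aligned with df's row order.
import numpy as np
import pandas as pd


def induce_stereotypical_bias(df, target_label, reduced_gender, keep_fraction, seed=0):
    """Subsample one gender within a single emotion label.

    The global gender balance stays roughly intact while the chosen label
    becomes gender-skewed, i.e. stereotypical rather than representational bias.
    """
    mask = (df["label"] == target_label) & (df["gender"] == reduced_gender)
    kept = df[mask].sample(frac=keep_fraction, random_state=seed)
    return pd.concat([df[~mask], kept]).reset_index(drop=True)


def recall_gap_per_emotion(df, y_pred):
    """Per-emotion recall for each apparent gender group and the absolute gap."""
    y_pred = np.asarray(y_pred)
    report = {}
    for label in df["label"].unique():
        recalls = {}
        for gender in df["gender"].unique():
            idx = (df["label"] == label) & (df["gender"] == gender)
            if idx.sum() == 0:
                continue
            recalls[gender] = float((y_pred[idx.to_numpy()] == label).mean())
        if len(recalls) >= 2:
            report[label] = {"recall": recalls,
                            "gap": max(recalls.values()) - min(recalls.values())}
    return report


# Example (hypothetical label/gender values): keep only 25% of the "happiness"
# images annotated as apparently male, simulating a "happiness is feminine"
# stereotype in the training data.
# biased_train = induce_stereotypical_bias(train_df, "happiness", "male", 0.25)
```

Under this framing, the up-to-$29\%$ discrepancy reported above would correspond to the largest per-emotion recall gap under the strongest induced bias, and the suggested safety range would correspond to the bias levels for which the gap stays near its unbiased baseline.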
Related papers
- Balancing the Scales: Enhancing Fairness in Facial Expression Recognition with Latent Alignment [5.784550537553534]
This work leverages representation learning based on latent spaces to mitigate bias in facial expression recognition systems.
It also enhances a deep learning model's fairness and overall accuracy.
arXiv Detail & Related papers (2024-10-25T10:03:10Z) - Less can be more: representational vs. stereotypical gender bias in facial expression recognition [3.9698529891342207]
Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions.
This paper investigates the propagation of demographic biases from datasets into machine learning models.
We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical.
arXiv Detail & Related papers (2024-06-25T09:26:49Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption-based language-vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z) - Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition [1.5340540198612824]
Of the three proposed metrics, two focus on the representational and stereotypical bias of the dataset, and the third on the residual bias of the trained model (a generic sketch of such dataset-level measures is given after this list).
We demonstrate the usefulness of the metrics by applying them to a FER problem based on the popular AffectNet dataset.
arXiv Detail & Related papers (2022-05-20T09:40:42Z) - A Deep Dive into Dataset Imbalance and Bias in Face Identification [49.210042420757894]
Media portrayals often center imbalance as the main source of bias in automated face recognition systems.
Previous studies of data imbalance in FR have focused exclusively on the face verification setting.
This work thoroughly explores the effects of each kind of imbalance possible in face identification and discusses other factors that may impact bias in this setting.
arXiv Detail & Related papers (2022-03-15T20:23:13Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z) - Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z)
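As a companion to the dataset-level metrics mentioned in the entry "Assessing Demographic Bias Transfer from Dataset to Model" above, the sketch below gives one generic, hypothetical way to quantify representational bias (global group imbalance) and stereotypical bias (dependence between group and label). These formulations are illustrative assumptions, not necessarily the metrics proposed in that paper.

```python
# Illustrative dataset-level bias measures; these formulations are assumptions
# for illustration, not necessarily the metrics used in the cited papers.
from collections import Counter

import numpy as np


def representational_bias(genders):
    """L1 deviation of the overall gender proportions from a uniform split.

    Zero means perfect global balance; larger values mean one group
    dominates the dataset as a whole.
    """
    counts = Counter(genders)
    total = sum(counts.values())
    props = np.array([c / total for c in counts.values()])
    return float(np.abs(props - 1.0 / len(counts)).sum())


def stereotypical_bias(genders, labels):
    """Mutual information between gender and label, normalized by label entropy.

    Zero means every emotion label has the same gender distribution; one
    means the apparent gender fully determines the emotion label.
    """
    genders, labels = np.asarray(genders), np.asarray(labels)
    n = len(labels)
    joint = Counter(zip(genders.tolist(), labels.tolist()))
    pg = Counter(genders.tolist())
    pl = Counter(labels.tolist())
    mi = 0.0
    for (g, lab), c in joint.items():
        p_gl = c / n
        mi += p_gl * np.log(p_gl / ((pg[g] / n) * (pl[lab] / n)))
    h_label = -sum((c / n) * np.log(c / n) for c in pl.values())
    return float(mi / h_label) if h_label > 0 else 0.0
```

Normalizing the mutual information by the label entropy keeps the stereotypical-bias score in [0, 1], which makes the central point of the paper above concrete: a dataset can be globally balanced (representational bias near zero) and still be strongly stereotyped (score near one).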
This list is automatically generated from the titles and abstracts of the papers on this site.