Uncovering the Bias in Facial Expressions
- URL: http://arxiv.org/abs/2011.11311v2
- Date: Tue, 16 Nov 2021 09:34:55 GMT
- Title: Uncovering the Bias in Facial Expressions
- Authors: Jessica Deuschel, Bettina Finzel, Ines Rieger
- Abstract summary: We train a neural network for Action Unit classification and analyze its performance quantitatively based on its accuracy and qualitatively based on heatmaps.
A structured review of our results indicates that we are able to detect bias.
Even though we cannot conclude from our results that lower classification performance emerged solely from gender and skin color bias, these biases must be addressed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decades, the machine and deep learning community has
celebrated great achievements in challenging tasks such as image
classification. The deep architecture of artificial neural networks, together
with the plenitude of available data, makes it possible to describe highly
complex relations. Yet, it is still impossible to fully capture what a deep
learning model has learned and to verify that it operates fairly and without
creating bias, especially in critical tasks, for instance those arising in the
medical field. One example of such a task is the detection of distinct facial
expressions, called Action Units, in facial images. Considering this specific
task, our research aims to
provide transparency regarding bias, specifically in relation to gender and
skin color. We train a neural network for Action Unit classification and
analyze its performance quantitatively based on its accuracy and qualitatively
based on heatmaps. A structured review of our results indicates that we are
able to detect bias. Even though we cannot conclude from our results that lower
classification performance emerged solely from gender and skin color bias,
these biases must be addressed, which is why we end by giving suggestions on
how the detected bias can be avoided.
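A minimal sketch of the quantitative side of such an audit, assuming plain NumPy arrays of predictions, labels, and subgroup annotations (the paper's actual evaluation pipeline and datasets are not specified here): compute per-subgroup accuracy and use the gap between the best- and worst-scoring groups as a bias indicator. The qualitative side would pair this with attribution heatmaps; the abstract mentions heatmaps but does not name the exact attribution method.

```python
# Hypothetical sketch: per-subgroup accuracy audit for an Action Unit
# classifier. `preds`, `labels`, and `groups` are illustrative placeholders,
# not the paper's actual data.
import numpy as np

def subgroup_accuracy(preds, labels, groups):
    """Accuracy per demographic subgroup (e.g. gender or skin color)."""
    return {g: float((preds[groups == g] == labels[groups == g]).mean())
            for g in np.unique(groups)}

# Toy example: binary predictions for one Action Unit across two subgroups.
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
labels = np.array([1, 0, 0, 1, 1, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

per_group = subgroup_accuracy(preds, labels, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, f"accuracy gap: {gap:.2f}")  # a large gap flags possible bias
```

The same computation applies to any grouping attribute, e.g. the male/female split used in the gender bias assessment paper listed below.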
Related papers
- Evaluating Visual Number Discrimination in Deep Neural Networks [8.447161322658628]
We show that vision-specific inductive biases are helpful in numerosity discrimination.
Even the strongest models, as measured on standard metrics of performance, fail to discriminate quantities in transfer experiments.
arXiv Detail & Related papers (2023-03-13T15:14:26Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Rethinking the Image Feature Biases Exhibited by Deep CNN Models [14.690952990358095]
We train models on two classification tasks, designed from human intuition, to identify anticipated biases.
We conclude that the combined effect of certain features is typically far more influential than any single feature.
arXiv Detail & Related papers (2021-11-03T08:04:06Z)
- Understanding and Mitigating Annotation Bias in Facial Expression Recognition [3.325054486984015]
Most existing works assume that human-generated annotations can be considered gold-standard and unbiased.
We focus on facial expression recognition and compare the label biases between lab-controlled and in-the-wild datasets.
We propose an AU-Calibrated Facial Expression Recognition framework that utilizes facial action units (AUs) and incorporates the triplet loss into the objective function; a minimal sketch of such an objective follows this entry.
arXiv Detail & Related papers (2021-08-19T05:28:07Z)
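As referenced above, here is a minimal sketch of folding a triplet term into a facial expression recognition objective. It is illustrative only: the weighting `lam`, the embedding names, and the way triplets would be mined from AU patterns are assumptions, not the paper's exact design.

```python
# Hedged sketch: cross-entropy on expression labels plus an AU-based triplet
# term. `z_pos` shares the anchor's AU pattern; `z_neg` has a conflicting one.
import torch
import torch.nn.functional as F

triplet = torch.nn.TripletMarginLoss(margin=1.0)

def fer_objective(logits, labels, z_anchor, z_pos, z_neg, lam=0.5):
    """Expression classification loss plus an AU-calibrated triplet term."""
    ce = F.cross_entropy(logits, labels)
    return ce + lam * triplet(z_anchor, z_pos, z_neg)
```

The intuition is that samples sharing AU activations should embed close together even when their human-provided expression labels are noisy or biased.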
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task, holistic framework is presented that jointly learns these tasks, generalizes effectively, and performs affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Responsible AI: Gender bias assessment in emotion recognition [6.833826997240138]
This research work aims to study a gender bias in deep learning methods for facial expression recognition.
More biased neural networks show a bigger accuracy gap in emotion recognition between male and female test sets.
arXiv Detail & Related papers (2021-03-21T17:00:21Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously; a condensed sketch follows this entry.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
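A condensed sketch in the spirit of this failure-based scheme: a "biased" network is trained to amplify easy, spurious cues, and its per-sample loss up-weights the samples it fails on when training the "debiased" network. Details the paper uses, such as the exponential moving average of per-sample losses, are omitted here, and all names are placeholders.

```python
# Hedged sketch of one training step of failure-based debiasing.
import torch
import torch.nn.functional as F

def debias_step(biased_net, debiased_net, opt_b, opt_d, x, y, q=0.7):
    # Generalized cross-entropy (GCE) emphasizes samples the biased network
    # already classifies easily, so it latches onto spurious shortcuts.
    p_b = F.softmax(biased_net(x), dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    loss_b = ((1.0 - p_b.clamp_min(1e-8) ** q) / q).mean()
    opt_b.zero_grad()
    loss_b.backward()
    opt_b.step()

    # Relative difficulty: high where the biased network fails, i.e. where
    # the shortcut does not explain the label.
    with torch.no_grad():
        ce_b = F.cross_entropy(biased_net(x), y, reduction="none")
    ce_d = F.cross_entropy(debiased_net(x), y, reduction="none")
    weight = ce_b / (ce_b + ce_d.detach() + 1e-8)

    # Train the debiased network with the reweighted cross-entropy.
    loss_d = (weight * ce_d).mean()
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_b.item(), loss_d.item()
```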
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task; a sketch of one such auxiliary objective follows this entry.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
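A hedged sketch of one way to supervise gradients with counterfactual pairs, as referenced above: the input-gradient of the task loss is encouraged to align with the direction that actually flips the label, i.e. the vector from an example to its minimally-different counterfactual. The paper's exact formulation may differ; `net`, `x_cf`, and `lam` are illustrative.

```python
# Hedged sketch: task loss plus a gradient-supervision term that aligns
# d(loss)/dx with the counterfactual direction (x_cf - x).
import torch
import torch.nn.functional as F

def gradient_supervision_loss(net, x, y, x_cf, lam=1.0):
    """Cross-entropy plus cosine alignment of the input-gradient."""
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(net(x), y)
    # create_graph=True so the alignment term remains differentiable.
    grad_x, = torch.autograd.grad(task_loss, x, create_graph=True)
    direction = (x_cf - x).detach()
    cos = F.cosine_similarity(grad_x.flatten(1), direction.flatten(1), dim=1)
    return task_loss + lam * (1.0 - cos).mean()
```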
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models; a rough sketch of the underlying idea follows this entry.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)
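A rough sketch of activation-based bias probing in the spirit of the entry above (InsideBias's actual bias measure differs in detail, and all names here are placeholders): compare the activation statistics of an intermediate layer across demographic groups; systematically weaker activations for one group can indicate a biased representation.

```python
# Hedged sketch: mean activation of an intermediate layer per group.
import torch

def mean_layer_activation(net, layer, x):
    """Mean absolute activation of `layer` over a batch `x`, via a hook."""
    acts = []
    handle = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    with torch.no_grad():
        net(x)
    handle.remove()
    return acts[0].abs().mean().item()

# Usage idea: compare mean_layer_activation(net, layer, x_group_a) against
# mean_layer_activation(net, layer, x_group_b); a large, consistent gap is a
# red flag worth following up with per-group accuracy and heatmaps.
```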
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.