Responsible AI: Gender bias assessment in emotion recognition
- URL: http://arxiv.org/abs/2103.11436v1
- Date: Sun, 21 Mar 2021 17:00:21 GMT
- Title: Responsible AI: Gender bias assessment in emotion recognition
- Authors: Artem Domnich and Gholamreza Anbarjafari
- Abstract summary: This research work studies gender bias in deep learning methods for facial expression recognition.
More biased neural networks show a larger accuracy gap in emotion recognition between male and female test sets.
- Score: 6.833826997240138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid development of artificial intelligence (AI) systems
amplifies many concerns in society. These algorithms inherit various biases
from humans, and their opaque inner workings make such biases harmful in
practice. As a result, researchers have begun addressing the issue through
work towards Responsible and Explainable AI. Among the many applications of
AI, facial expression recognition may not be the most critical one, yet it is
considered a valuable part of human-AI interaction. The evolution of facial
expression recognition from feature-based methods to deep learning has
drastically improved the quality of such algorithms. This work studies gender
bias in deep learning methods for facial expression recognition by training
six distinct neural networks and analysing them for the presence of bias
according to three definitions of fairness. The main outcomes show which
models are gender-biased, which are not, and how a subject's gender affects
emotion recognition. More biased neural networks show a larger accuracy gap
in emotion recognition between male and female test sets, and this trend
also holds for true positive and false positive rates. In addition, the
analysis reveals which emotions are classified more accurately for men and
which for women. Since bias in facial expression recognition is not well
studied, there is broad scope for follow-up work, including detailed
analysis of state-of-the-art methods as well as the study of other biases.
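As a rough illustration of the gap-based assessment described above, the sketch below compares accuracy, true positive rate, and false positive rate between male and female test subsets, treating one emotion class one-vs-rest. The abstract does not name its three fairness definitions; TPR/FPR parity corresponds to the common equal-opportunity and equalized-odds criteria. The arrays, gender encoding, and choice of emotion class are hypothetical stand-ins, not the paper's exact protocol.

```python
# Hedged sketch: per-gender accuracy/TPR/FPR gaps for one emotion class.
import numpy as np

def rates(y_true, y_pred, positive_class):
    """Accuracy plus one-vs-rest TPR/FPR for a single emotion class."""
    pos_true = y_true == positive_class
    pos_pred = y_pred == positive_class
    tpr = pos_pred[pos_true].mean()     # hit rate on that emotion
    fpr = pos_pred[~pos_true].mean()    # false alarms on the other emotions
    acc = (y_true == y_pred).mean()
    return acc, tpr, fpr

rng = np.random.default_rng(0)
n, n_classes = 1000, 7                  # e.g. seven basic emotions (assumed)
y_true = rng.integers(0, n_classes, n)
y_pred = rng.integers(0, n_classes, n)  # stand-in for a network's predictions
gender = rng.integers(0, 2, n)          # 0 = male, 1 = female (hypothetical)

for g, name in [(0, "male"), (1, "female")]:
    mask = gender == g
    acc, tpr, fpr = rates(y_true[mask], y_pred[mask], positive_class=3)
    print(f"{name}: acc={acc:.3f} tpr={tpr:.3f} fpr={fpr:.3f}")
# Large absolute gaps between the two printed rows are the kind of signal
# the paper uses to flag a network as gender-biased.
```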
Related papers
- "My Kind of Woman": Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law [0.0]
This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations.
By incorporating cognitive psychology and feminist legal theory, we examine how the data used for AI training can foster gender diversity and fairness.
arXiv Detail & Related papers (2024-06-27T20:03:27Z) - Bias in Generative AI [2.5830293457323266]
This study analyzed images generated by three popular generative artificial intelligence (AI) tools to investigate potential bias in AI generators.
All three AI generators exhibited bias against women and African Americans.
Women were depicted as younger with more smiles and happiness, while men were depicted as older with more neutral expressions and anger.
arXiv Detail & Related papers (2024-03-05T07:34:41Z) - I am Only Happy When There is Light: The Impact of Environmental Changes
on Affective Facial Expressions Recognition [65.69256728493015]
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z) - Assessing Gender Bias in Predictive Algorithms using eXplainable AI [1.9798034349981162]
Predictive algorithms have powerful potential to offer benefits in areas as varied as medicine and education.
However, they can inherit the biases and prejudices present in humans.
Their outcomes can then systematically repeat errors that produce unfair results.
arXiv Detail & Related papers (2022-03-19T07:47:45Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are as biased as, or more biased than, academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task.
arXiv Detail & Related papers (2021-10-15T22:26:20Z) - Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units
and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task, holistic framework is presented that is able to jointly learn, generalize effectively, and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z) - Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscle movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test (a pointwise approximation is sketched after this list).
arXiv Detail & Related papers (2021-03-01T08:31:08Z) - Uncovering the Bias in Facial Expressions [0.0]
We train a neural network for Action Unit classification and analyze its performance quantitatively based on its accuracy and qualitatively based on heatmaps.
A structured review of our results indicates that we are able to detect bias.
Even though we cannot conclude from our results that lower classification performance emerged solely from gender and skin color bias, these biases must be addressed.
arXiv Detail & Related papers (2020-11-23T10:20:10Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition (a minimal sketch of such a pipeline appears just after this list).
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
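The continuous-emotion entry directly above names a deep convolutional autoencoder feeding a Support Vector Regressor. The paper's exact architecture is not given in the summary, so the following is only a minimal generic sketch of that kind of pipeline, assuming PyTorch and scikit-learn, with random stand-in data for face crops and valence labels.

```python
# Hedged sketch: conv autoencoder for face features, SVR for continuous affect.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),   # 48x48 -> 24x24
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),  # 24x24 -> 12x12
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 12 * 12, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 12 * 12),
            nn.ReLU(),
            nn.Unflatten(1, (16, 12, 12)),
            nn.ConvTranspose2d(16, 8, 2, stride=2),    # 12x12 -> 24x24
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),     # 24x24 -> 48x48
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

rng = np.random.default_rng(0)
faces = torch.rand(256, 1, 48, 48)        # stand-in grayscale face crops
valence = rng.uniform(-1.0, 1.0, 256)     # stand-in continuous affect labels

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                    # train the autoencoder to reconstruct
    recon, _ = model(faces)
    loss = loss_fn(recon, faces)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                     # encode faces to compact latent codes
    codes = model(faces)[1].numpy()
svr = SVR(kernel="rbf").fit(codes, valence)  # regress affect from latent codes
print("predicted valence:", svr.predict(codes[:3]))
```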
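The functional F-test from the "Emotion pattern detection" entry above can be approximated pointwise: run a one-way ANOVA at each time step of the expression curves. The sketch below does this with scipy.stats.f_oneway on invented muscle-movement curves for two emotion groups; a real functional ANOVA models whole curves, so this is only an illustration.

```python
# Hedged sketch: pointwise stand-in for a functional F-test, assuming SciPy.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
T = 50                                           # time points per curve
happy = rng.normal(size=(30, T)) + np.linspace(0.0, 1.0, T)  # rising trend
angry = rng.normal(size=(30, T))                 # no trend in this group

# one-way ANOVA at each time step: do the group means differ at time t?
p_values = np.array(
    [f_oneway(happy[:, t], angry[:, t]).pvalue for t in range(T)]
)
significant = p_values < 0.05
print(f"significant time points: {significant.sum()} / {T}")
```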
This list is automatically generated from the titles and abstracts of the papers on this site.