Analysis of Gender Inequality In Face Recognition Accuracy
- URL: http://arxiv.org/abs/2002.00065v1
- Date: Fri, 31 Jan 2020 21:32:53 GMT
- Title: Analysis of Gender Inequality In Face Recognition Accuracy
- Authors: Vítor Albiero, Krishnapriya K.S., Kushal Vangara, Kai Zhang, Michael C. King, and Kevin W. Bowyer
- Abstract summary: We show that accuracy is lower for women due to the combination of (1) the impostor distribution for women having a skew toward higher similarity scores, and (2) the genuine distribution for women having a skew toward lower similarity scores.
We show that this phenomenon of the impostor and genuine distributions for women shifting closer towards each other is general across datasets of African-American, Caucasian, and Asian faces.
- Score: 11.6168015920729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a comprehensive analysis of how and why face recognition accuracy
differs between men and women. We show that accuracy is lower for women due to
the combination of (1) the impostor distribution for women having a skew toward
higher similarity scores, and (2) the genuine distribution for women having a
skew toward lower similarity scores. We show that this phenomenon of the
impostor and genuine distributions for women shifting closer towards each other
is general across datasets of African-American, Caucasian, and Asian faces. We
show that the distribution of facial expressions may differ between
male/female, but that the accuracy difference persists for image subsets rated
confidently as neutral expression. The accuracy difference also persists for
image subsets rated as close to zero pitch angle. Even when removing images
with forehead partially occluded by hair/hat, the same impostor/genuine
accuracy difference persists. We show that the female genuine distribution
improves when only female images without facial cosmetics are used, but that
the female impostor distribution also degrades at the same time. Lastly, we
show that the accuracy difference persists even if a state-of-the-art deep
learning method is trained from scratch using training data explicitly balanced
between male and female images and subjects.
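To make the impostor/genuine framing above concrete, here is a minimal Python sketch (not the authors' code) of how the two score distributions and their separation are commonly measured: pairwise cosine similarities between face embeddings are split into genuine (same subject) and impostor (different subject) sets, and a d-prime statistic summarizes how far apart they sit. The embedding arrays, subject-ID lists, and per-gender split below are assumed placeholders for illustration.

```python
# Minimal sketch, not the paper's code: genuine/impostor score distributions
# and their d-prime separation, assuming L2-normalized face embeddings.
import numpy as np
from itertools import combinations

def score_distributions(embeddings, subject_ids):
    """Split pairwise cosine similarities into genuine (same subject)
    and impostor (different subject) scores."""
    genuine, impostor = [], []
    for i, j in combinations(range(len(embeddings)), 2):
        s = float(np.dot(embeddings[i], embeddings[j]))  # cosine similarity (unit vectors)
        (genuine if subject_ids[i] == subject_ids[j] else impostor).append(s)
    return np.array(genuine), np.array(impostor)

def d_prime(genuine, impostor):
    """Separation of the two distributions: smaller values mean genuine and
    impostor scores sit closer together, i.e. harder to threshold."""
    return abs(genuine.mean() - impostor.mean()) / np.sqrt(
        0.5 * (genuine.var() + impostor.var()))

# Hypothetical usage with per-gender subsets (names are placeholders):
# g_f, i_f = score_distributions(female_embeddings, female_ids)
# g_m, i_m = score_distributions(male_embeddings, male_ids)
# print(d_prime(g_f, i_f), d_prime(g_m, i_m))
```

In this framing, the paper's observation that female impostor scores skew higher and female genuine scores skew lower corresponds to a smaller separation value for the female subset than for the male subset.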
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the AmbGIMT benchmark (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- The Gender Gap in Face Recognition Accuracy Is a Hairy Problem [8.768049933358968]
We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
arXiv Detail & Related papers (2022-06-10T04:32:47Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Gendered Differences in Face Recognition Accuracy Explained by Hairstyles, Makeup, and Facial Morphology [11.50297186426025]
There is consensus in the research literature that face recognition accuracy is lower for females.
Controlling for equal amount of visible face in the test images mitigates the apparent higher false non-match rate for females.
Additional analysis shows that balancing the dataset for makeup further lowers the false non-match rate for females.
arXiv Detail & Related papers (2021-12-29T17:07:33Z)
- Does Face Recognition Error Echo Gender Classification Error? [9.176056742068813]
We analyze results from three different gender classification algorithms, and two face recognition algorithms.
For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution.
For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution.
arXiv Detail & Related papers (2021-04-28T14:43:31Z)
- Is Face Recognition Sexist? No, Gendered Hairstyles and Biology Are [10.727923887885398]
We present the first experimental analysis to identify major causes of lower face recognition accuracy for females.
Controlling for equal amount of visible face in the test images reverses the apparent higher false non-match rate for females.
Also, principal component analysis indicates that images of two different females are inherently more similar than images of two different males.
arXiv Detail & Related papers (2020-08-16T20:29:05Z)
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition [51.856693288834975]
State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
arXiv Detail & Related papers (2020-06-14T08:54:03Z)
- How Does Gender Balance In Training Data Affect Face Recognition Accuracy? [12.362029427868206]
It is often speculated that lower accuracy for women is caused by under-representation in the training data.
This work investigates whether female under-representation in the training data is truly the cause of lower accuracy for females on test data.
arXiv Detail & Related papers (2020-02-07T18:11:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.