The Gender Gap in Face Recognition Accuracy Is a Hairy Problem
- URL: http://arxiv.org/abs/2206.04867v1
- Date: Fri, 10 Jun 2022 04:32:47 GMT
- Title: The Gender Gap in Face Recognition Accuracy Is a Hairy Problem
- Authors: Aman Bhatta, Vítor Albiero, Kevin W. Bowyer, Michael C. King
- Abstract summary: We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
- Score: 8.768049933358968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is broadly accepted that there is a "gender gap" in face recognition
accuracy, with females having higher false match and false non-match rates.
However, relatively little is known about the cause(s) of this gender gap. Even
the recent NIST report on demographic effects lists "analyze cause and effect"
under "what we did not do". We first demonstrate that female and male
hairstyles have important differences that impact face recognition accuracy. In
particular, compared to females, male facial hair contributes to creating a
greater average difference in appearance between different male faces. We then
demonstrate that when the data used to estimate recognition accuracy is
balanced across gender for how hairstyles occlude the face, the initially
observed gender gap in accuracy largely disappears. We show this result for two
different matchers, and for images of both Caucasians and African-Americans.
These results suggest that future research on demographic
variation in accuracy should include a check for balanced quality of the test
data as part of the problem formulation. To promote reproducible research,
matchers, attribute classifiers, and datasets used in this research are/will be
publicly available.
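To make the balancing-then-measuring idea in the abstract concrete, here is a minimal sketch of how one could estimate per-gender error rates before and after equalizing hairstyle occlusion across genders. It is not the authors' released code: the DataFrame columns (score, genuine, gender, occlusion), the occlusion bins, and the per-pair occlusion label are all illustrative assumptions.

```python
# Minimal sketch, not the authors' released code. Assumes a pandas
# DataFrame of scored face-image pairs with hypothetical columns:
#   score     - matcher similarity score for the pair
#   genuine   - True if both images show the same person
#   gender    - 'F' or 'M' (pairs are assumed same-gender)
#   occlusion - fraction of face occluded by hair, e.g. the larger of
#               the two images' values, from an attribute classifier
import numpy as np
import pandas as pd

def error_rates(pairs: pd.DataFrame, threshold: float) -> dict:
    """False non-match and false match rates at a fixed threshold."""
    genuine = pairs[pairs["genuine"]]
    impostor = pairs[~pairs["genuine"]]
    return {"FNMR": float((genuine["score"] < threshold).mean()),
            "FMR": float((impostor["score"] >= threshold).mean())}

def balance_by_occlusion(pairs: pd.DataFrame,
                         bins=(0.0, 0.1, 0.3, 1.0),
                         seed: int = 0) -> pd.DataFrame:
    """Subsample so that, within each occlusion bin, both genders
    contribute the same number of pairs."""
    binned = pairs.assign(
        occ_bin=pd.cut(pairs["occlusion"], bins=list(bins),
                       include_lowest=True))
    keep = []
    for _, grp in binned.groupby("occ_bin", observed=True):
        n = grp.groupby("gender").size().min()  # smaller gender's count
        for _, sub in grp.groupby("gender"):
            keep.append(sub.sample(n=n, random_state=seed))
    return pd.concat(keep)

# Gender gap before vs. after balancing, at an operating threshold t:
#   for g in ("F", "M"):
#       print(g, error_rates(pairs[pairs["gender"] == g], t))
#   balanced = balance_by_occlusion(pairs)
#   for g in ("F", "M"):
#       print(g, error_rates(balanced[balanced["gender"] == g], t))
```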
Related papers
- Gender Stereotyping Impact in Facial Expression Recognition [1.5340540198612824]
In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, apparent gender representation is usually mostly balanced, but representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy of up to 29% in the recognition of certain emotions between genders under the worst bias conditions.
arXiv Detail & Related papers (2022-10-11T10:52:23Z)
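As an illustration of the derivative-dataset construction described in the entry above, a hedged sketch follows: subsample one emotion label so that it carries a controlled gender proportion. The column names (emotion, gender) and the helper are hypothetical, not the paper's pipeline.

```python
# Hypothetical sketch of injecting stereotypical bias into an FER
# dataset: force a target gender share for a single emotion label.
import pandas as pd

def skew_label(df: pd.DataFrame, label: str, female_share: float,
               seed: int = 0) -> pd.DataFrame:
    """Subsample rows with emotion == label so that roughly
    `female_share` of them are female; other labels are untouched."""
    target = df[df["emotion"] == label]
    f = target[target["gender"] == "F"]
    m = target[target["gender"] == "M"]
    # largest total size the requested proportion allows
    n_total = int(min(len(f) / max(female_share, 1e-9),
                      len(m) / max(1.0 - female_share, 1e-9)))
    n_f = min(int(n_total * female_share), len(f))
    n_m = min(n_total - n_f, len(m))
    rest = df[df["emotion"] != label]
    return pd.concat([rest,
                      f.sample(n=n_f, random_state=seed),
                      m.sample(n=n_m, random_state=seed)])

# e.g. a 90%-female "happy" class: biased = skew_label(df, "happy", 0.9)
```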
- A Deep Dive into Dataset Imbalance and Bias in Face Identification [49.210042420757894]
Media portrayals often center imbalance as the main source of bias in automated face recognition systems.
Previous studies of data imbalance in FR have focused exclusively on the face verification setting.
This work thoroughly explores the effects of each kind of imbalance possible in face identification, and discusses other factors which may impact bias in this setting.
arXiv Detail & Related papers (2022-03-15T20:23:13Z)
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, an academic model.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Gendered Differences in Face Recognition Accuracy Explained by Hairstyles, Makeup, and Facial Morphology [11.50297186426025]
There is consensus in the research literature that face recognition accuracy is lower for females.
Controlling for an equal amount of visible face in the test images mitigates the apparent higher false non-match rate for females.
Additional analysis shows that makeup-balanced datasets further lower the false non-match rate for females.
arXiv Detail & Related papers (2021-12-29T17:07:33Z)
- Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task.
arXiv Detail & Related papers (2021-10-15T22:26:20Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
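The single-threshold observation in the entry above can be made concrete with a small sketch (hypothetical data layout, not the paper's code): calibrate a decision threshold per subgroup at the same target false match rate, instead of pooling all impostor scores into one global threshold.

```python
# Hypothetical sketch: one global threshold vs. per-subgroup thresholds
# calibrated to the same target false match rate (FMR).
import numpy as np

def threshold_at_fmr(impostor_scores: np.ndarray,
                     target_fmr: float) -> float:
    """Threshold at which the fraction of impostor scores at or above
    it is approximately target_fmr."""
    return float(np.quantile(impostor_scores, 1.0 - target_fmr))

# impostors: dict mapping subgroup name -> array of impostor pair scores
# t_global = threshold_at_fmr(np.concatenate(list(impostors.values())), 1e-3)
# t_per_group = {g: threshold_at_fmr(s, 1e-3) for g, s in impostors.items()}
# With one global threshold, subgroups whose impostor scores skew high
# operate at a higher effective FMR than the others.
```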
- Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups [0.8594140167290097]
The aim of this paper is to investigate the differential performance of the gender classification algorithms across gender-race groups.
For all the algorithms used, Black females (and the Black race in general) always obtained the lowest accuracy rates.
Middle Eastern males and Latino females obtained higher accuracy rates most of the time.
arXiv Detail & Related papers (2020-09-24T04:56:10Z)
- Is Face Recognition Sexist? No, Gendered Hairstyles and Biology Are [10.727923887885398]
We present the first experimental analysis to identify major causes of lower face recognition accuracy for females.
Controlling for equal amount of visible face in the test images reverses the apparent higher false non-match rate for females.
Also, principal component analysis indicates that images of two different females are inherently more similar to each other than images of two different males.
arXiv Detail & Related papers (2020-08-16T20:29:05Z)
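A hedged sketch of the kind of PCA comparison the entry above alludes to (illustrative, not the authors' experiment): project per-identity features into a shared PCA space and compare how spread out female identities are versus male identities.

```python
# Illustrative sketch, not the authors' experiment: compare the spread
# of female vs. male identities in a shared PCA space. X_f and X_m hold
# one feature row per identity (e.g. pixels or embeddings); the number
# of identities is assumed to exceed n_components.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

def group_spread(X_f: np.ndarray, X_m: np.ndarray,
                 n_components: int = 50):
    """Mean pairwise distance within each gender, in PCA coordinates
    fitted on both genders together."""
    pca = PCA(n_components=n_components).fit(np.vstack([X_f, X_m]))
    return (float(pdist(pca.transform(X_f)).mean()),
            float(pdist(pca.transform(X_m)).mean()))

# A smaller female spread than male spread would be consistent with
# "two different females look more alike than two different males".
```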
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition [51.856693288834975]
State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
arXiv Detail & Related papers (2020-06-14T08:54:03Z)
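AGENDA's exact training procedure is described in that paper; as a hedged illustration of the general adversarial de-biasing pattern it belongs to, the sketch below uses a gradient-reversal adversary (a common formulation, assumed here rather than taken from the paper) to strip gender cues from fixed face descriptors while preserving identity information.

```python
# Generic adversarial de-biasing sketch (not AGENDA's exact procedure):
# learn a projection of frozen 512-d face descriptors from which an
# adversary cannot predict gender, while an identity head still can.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward
    pass so the adversary's loss removes gender information upstream."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

debias = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
gender_adv = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))
id_head = nn.Linear(512, 1000)  # hypothetical: 1000 training subjects

opt = torch.optim.Adam([*debias.parameters(), *gender_adv.parameters(),
                        *id_head.parameters()], lr=1e-4)
ce = nn.CrossEntropyLoss()

def train_step(feat, identity, gender, lambd=1.0):
    """feat: (B, 512) descriptors from a frozen recognition network;
    identity, gender: (B,) integer class labels."""
    z = debias(feat)
    loss_id = ce(id_head(z), identity)  # keep identity information
    loss_adv = ce(gender_adv(GradReverse.apply(z, lambd)), gender)
    # the adversary learns to predict gender; the reversed gradient
    # pushes `debias` to make z uninformative about gender
    loss = loss_id + loss_adv
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```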
- Analysis of Gender Inequality In Face Recognition Accuracy [11.6168015920729]
We show that accuracy is lower for women due to the combination of (1) the impostor distribution for women having a skew toward higher similarity scores, and (2) the genuine distribution for women having a skew toward lower similarity scores.
We show that this phenomenon of the impostor and genuine distributions for women shifting closer towards each other is general across datasets of African-American, Caucasian, and Asian faces.
arXiv Detail & Related papers (2020-01-31T21:32:53Z)
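To make the distribution-shift explanation in the entry above concrete, a minimal sketch (assumed per-gender score arrays, not the paper's data): summarize how separated the genuine and impostor score distributions are with a d-prime statistic; a smaller d' means the two distributions sit closer together, which raises both error rates.

```python
# Minimal sketch with assumed per-gender score arrays, not the paper's
# data: d' measures the separation between genuine and impostor score
# distributions; distributions shifting toward each other shrink d'.
import numpy as np

def d_prime(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """d' = |mean_gen - mean_imp| / sqrt((var_gen + var_imp) / 2)."""
    return float(abs(genuine.mean() - impostor.mean())
                 / np.sqrt((genuine.var() + impostor.var()) / 2.0))

# gen_scores, imp_scores: dicts mapping 'F'/'M' -> score arrays
# for g in ("F", "M"):
#     print(g, d_prime(gen_scores[g], imp_scores[g]))
```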