Gendered Differences in Face Recognition Accuracy Explained by
Hairstyles, Makeup, and Facial Morphology
- URL: http://arxiv.org/abs/2112.14656v1
- Date: Wed, 29 Dec 2021 17:07:33 GMT
- Title: Gendered Differences in Face Recognition Accuracy Explained by
Hairstyles, Makeup, and Facial Morphology
- Authors: Vítor Albiero, Kai Zhang, Michael C. King, Kevin W. Bowyer
- Abstract summary: There is consensus in the research literature that face recognition accuracy is lower for females.
Controlling for an equal amount of visible face in the test images mitigates the apparent higher false non-match rate for females.
Additional analysis shows that makeup-balanced datasets further improve accuracy for females, yielding lower false non-match rates.
- Score: 11.50297186426025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Media reports have accused face recognition of being "biased",
"sexist" and "racist". There is consensus in the research literature that face
recognition accuracy is lower for females, who often have both a higher false
match rate and a higher false non-match rate. However, there is little
published research aimed at identifying the cause of lower accuracy for
females. For instance, the 2019 Face Recognition Vendor Test that documents
lower female accuracy across a broad range of algorithms and datasets also
lists "Analyze cause and effect" under the heading "What we did not do". We
present the first experimental analysis to identify major causes of lower face
recognition accuracy for females on datasets where previous research has
observed this result. Controlling for an equal amount of visible face in the
test images mitigates the apparent higher false non-match rate for females.
Additional analysis shows that makeup-balanced datasets further improve
accuracy for females, yielding lower false non-match rates. Finally, a
clustering experiment suggests that images of two different females are
inherently more similar than images of two different males, potentially
accounting for a difference in false match rates.
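The two error rates the abstract refers to can be made concrete with a minimal sketch; the function name, score values, and threshold below are illustrative, not taken from the paper:

```python
# Minimal sketch of the two error metrics discussed in the abstract.
# A "false match" is an impostor (different-identity) pair scoring at or
# above the decision threshold; a "false non-match" is a genuine
# (same-identity) pair scoring below it.

def fmr_fnmr(impostor_scores, genuine_scores, threshold):
    """Return (false match rate, false non-match rate) at a threshold."""
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr

impostor = [0.10, 0.25, 0.40, 0.55]   # different-identity pair scores
genuine = [0.35, 0.60, 0.80, 0.90]    # same-identity pair scores
print(fmr_fnmr(impostor, genuine, threshold=0.5))  # -> (0.25, 0.25)
```

Raising the threshold trades false matches for false non-matches, which is why the paper reports both rates separately for each gender.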
Related papers
- Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z)
- The Gender Gap in Face Recognition Accuracy Is a Hairy Problem [8.768049933358968]
We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
arXiv Detail & Related papers (2022-06-10T04:32:47Z)
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task.
arXiv Detail & Related papers (2021-10-15T22:26:20Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We have observed that image distortions have a relationship with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Does Face Recognition Error Echo Gender Classification Error? [9.176056742068813]
We analyze results from three different gender classification algorithms, and two face recognition algorithms.
For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution.
For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution.
arXiv Detail & Related papers (2021-04-28T14:43:31Z)
- Is Face Recognition Sexist? No, Gendered Hairstyles and Biology Are [10.727923887885398]
We present the first experimental analysis to identify major causes of lower face recognition accuracy for females.
Controlling for an equal amount of visible face in the test images reverses the apparent higher false non-match rate for females.
Also, principal component analysis indicates that images of two different females are inherently more similar than images of two different males.
arXiv Detail & Related papers (2020-08-16T20:29:05Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition [51.856693288834975]
State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
arXiv Detail & Related papers (2020-06-14T08:54:03Z)
- How Does Gender Balance In Training Data Affect Face Recognition Accuracy? [12.362029427868206]
It is often speculated that lower accuracy for women is caused by under-representation in the training data.
This work investigates whether female under-representation in the training data is truly the cause of lower accuracy for females on test data.
arXiv Detail & Related papers (2020-02-07T18:11:01Z)
- Analysis of Gender Inequality In Face Recognition Accuracy [11.6168015920729]
We show that accuracy is lower for women due to the combination of (1) the impostor distribution for women having a skew toward higher similarity scores, and (2) the genuine distribution for women having a skew toward lower similarity scores.
We show that this phenomenon of the impostor and genuine distributions for women shifting closer towards each other is general across datasets of African-American, Caucasian, and Asian faces.
arXiv Detail & Related papers (2020-01-31T21:32:53Z)
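Several entries above describe the female impostor and genuine score distributions shifting closer together. One standard statistic for quantifying that separation is d-prime; a minimal sketch with hypothetical score values (not taken from any of the papers):

```python
# Sketch of d-prime: the gap between the genuine and impostor score
# distribution means, in units of their pooled standard deviation.
# Larger values mean the distributions are easier to separate with a
# threshold; distributions shifting closer together lower d-prime.
import math
import statistics

def d_prime(genuine, impostor):
    """Separation between two score distributions in pooled-std units."""
    mean_gap = abs(statistics.mean(genuine) - statistics.mean(impostor))
    pooled_var = (statistics.variance(genuine)
                  + statistics.variance(impostor)) / 2
    return mean_gap / math.sqrt(pooled_var)

# Well-separated distributions vs. distributions shifted closer together
# (as the analyses above report for female subjects).
print(d_prime([8, 9, 10], [1, 2, 3]))  # -> 7.0
print(d_prime([6, 7, 8], [3, 4, 5]))   # -> 3.0
```

A smaller d-prime forces any single threshold to incur more false matches, more false non-matches, or both.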
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.