Does Face Recognition Error Echo Gender Classification Error?
- URL: http://arxiv.org/abs/2104.13803v1
- Date: Wed, 28 Apr 2021 14:43:31 GMT
- Title: Does Face Recognition Error Echo Gender Classification Error?
- Authors: Ying Qiu, Vítor Albiero, Michael C. King, Kevin W. Bowyer
- Abstract summary: We analyze results from three different gender classification algorithms, and two face recognition algorithms.
For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution.
For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution.
- Score: 9.176056742068813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is the first to explore the question of whether images that are
classified incorrectly by a face analytics algorithm (e.g., gender
classification) are any more or less likely to participate in an image pair
that results in a face recognition error. We analyze results from three
different gender classification algorithms (one open-source and two
commercial), and two face recognition algorithms (one open-source and one
commercial), on image sets representing four demographic groups
(African-American female and male, Caucasian female and male). For impostor
image pairs, our results show that pairs in which one image has a gender
classification error have a better impostor distribution than pairs in which
both images have correct gender classification, and so are less likely to
generate a false match error. For genuine image pairs, our results show that
individuals whose images have a mix of correct and incorrect gender
classification have a worse genuine distribution (increased false non-match
rate) compared to individuals whose images all have correct gender
classification. Thus, compared to images that generate correct gender
classification, images that generate gender classification errors do generate a
different pattern of recognition errors, both better (false match) and worse
(false non-match).
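The comparison described in the abstract can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' released code: the input arrays (per-pair similarity scores, genuine/impostor labels, and per-image gender-classification correctness flags) are hypothetical, and genuine pairs are grouped per pair rather than per individual, which simplifies the paper's per-subject grouping.

```python
import numpy as np

# Hypothetical inputs (not from the paper's code release):
#   scores[i]     -- face recognition similarity score for image pair i
#   is_genuine[i] -- True if pair i shows the same person
#   correct_a[i]  -- True if image A of pair i was gender-classified correctly
#   correct_b[i]  -- True if image B of pair i was gender-classified correctly
def compare_error_rates(scores, is_genuine, correct_a, correct_b, threshold):
    scores = np.asarray(scores, dtype=float)
    is_genuine = np.asarray(is_genuine, dtype=bool)
    both_correct = np.asarray(correct_a, dtype=bool) & np.asarray(correct_b, dtype=bool)

    # Impostor pairs: false match rate (FMR) when both images are gender-classified
    # correctly vs. when at least one image has a gender classification error.
    imp = ~is_genuine
    fmr_both_correct = np.mean(scores[imp & both_correct] >= threshold)
    fmr_with_error = np.mean(scores[imp & ~both_correct] >= threshold)

    # Genuine pairs: false non-match rate (FNMR) for the same two groups.
    gen = is_genuine
    fnmr_both_correct = np.mean(scores[gen & both_correct] < threshold)
    fnmr_with_error = np.mean(scores[gen & ~both_correct] < threshold)

    return {
        "FMR_both_correct": fmr_both_correct,
        "FMR_with_error": fmr_with_error,
        "FNMR_both_correct": fnmr_both_correct,
        "FNMR_with_error": fnmr_with_error,
    }
```

Under the findings reported in the abstract, one would expect FMR_with_error to come out lower than FMR_both_correct, and FNMR_with_error to come out higher than FNMR_both_correct.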
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- The Gender Gap in Face Recognition Accuracy Is a Hairy Problem [8.768049933358968]
We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
arXiv Detail & Related papers (2022-06-10T04:32:47Z)
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that the commercial models are always as biased as, or more biased than, the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Gendered Differences in Face Recognition Accuracy Explained by Hairstyles, Makeup, and Facial Morphology [11.50297186426025]
There is consensus in the research literature that face recognition accuracy is lower for females.
Controlling for an equal amount of visible face in the test images mitigates the apparently higher false non-match rate for females.
Additional analysis shows that balancing the datasets for makeup further lowers the false non-match rate for females.
arXiv Detail & Related papers (2021-12-29T17:07:33Z)
- Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search [8.730027941735804]
We study a unique gender bias in image search.
The search images are often gender-imbalanced for gender-neutral natural language queries.
We introduce two novel debiasing approaches.
arXiv Detail & Related papers (2021-09-12T04:47:33Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We have observed that image distortions have a relationship with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Is Face Recognition Sexist? No, Gendered Hairstyles and Biology Are [10.727923887885398]
We present the first experimental analysis to identify major causes of lower face recognition accuracy for females.
Controlling for an equal amount of visible face in the test images reverses the apparently higher false non-match rate for females.
Also, principal component analysis indicates that images of two different females are inherently more similar to each other than images of two different males.
arXiv Detail & Related papers (2020-08-16T20:29:05Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition [51.856693288834975]
State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
arXiv Detail & Related papers (2020-06-14T08:54:03Z)
- Analysis of Gender Inequality In Face Recognition Accuracy [11.6168015920729]
We show that accuracy is lower for women due to the combination of (1) the impostor distribution for women having a skew toward higher similarity scores, and (2) the genuine distribution for women having a skew toward lower similarity scores.
We show that this phenomenon of the impostor and genuine distributions for women shifting closer towards each other is general across datasets of African-American, Caucasian, and Asian faces.
arXiv Detail & Related papers (2020-01-31T21:32:53Z)
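The distribution-shift effect described in the last entry above (impostor scores skewing higher and genuine scores skewing lower) can be quantified with the standard d-prime separation statistic. The snippet below uses entirely hypothetical score distributions, only to illustrate how that shift reduces separability and thus increases recognition errors; it is not drawn from any of the papers listed here.

```python
import numpy as np

def d_prime(genuine, impostor):
    """Separation between genuine and impostor similarity-score distributions."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    return abs(genuine.mean() - impostor.mean()) / np.sqrt(
        0.5 * (genuine.var() + impostor.var())
    )

rng = np.random.default_rng(0)
# Hypothetical baseline: well-separated genuine and impostor scores.
gen_a, imp_a = rng.normal(0.70, 0.10, 10_000), rng.normal(0.30, 0.10, 10_000)
# Shifted case matching the cited analysis: genuine scores skew lower and
# impostor scores skew higher, so the two distributions sit closer together.
gen_b, imp_b = rng.normal(0.65, 0.10, 10_000), rng.normal(0.35, 0.10, 10_000)

print(d_prime(gen_a, imp_a))  # larger separation, fewer errors at a fixed threshold
print(d_prime(gen_b, imp_b))  # smaller separation, more false matches and non-matches
```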
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.