Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face
Recognition
- URL: http://arxiv.org/abs/2006.07845v2
- Date: Thu, 17 Sep 2020 06:25:31 GMT
- Title: Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face
Recognition
- Authors: Prithviraj Dhar, Joshua Gleason, Hossein Souri, Carlos D. Castillo,
Rama Chellappa
- Abstract summary: State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel 'Adversarial Gender De-biasing algorithm (AGENDA)' to reduce the gender information present in face descriptors.
- Score: 51.856693288834975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art deep networks implicitly encode gender information while
being trained for face recognition. Gender is often viewed as an important
attribute with respect to identifying faces. However, the implicit encoding of
gender information in face descriptors has two major issues: (a) it makes the
descriptors susceptible to privacy leakage, i.e., a malicious classifier can be
trained to predict a face's gender from such descriptors; and (b) it appears to
contribute to gender bias in face recognition, i.e., we find a significant
difference in the recognition accuracy of DCNNs on male and female faces.
Therefore, we present a novel 'Adversarial Gender De-biasing algorithm
(AGENDA)' to reduce the gender information present in face descriptors obtained
from previously trained face recognition networks. We show that AGENDA
significantly reduces gender predictability of face descriptors. Consequently,
we are also able to reduce gender bias in face verification while maintaining
reasonable recognition performance.
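To make the adversarial mechanism concrete, below is a minimal PyTorch sketch of descriptor-level de-biasing in the spirit of AGENDA: a small transform is trained on top of frozen face descriptors with an identity head and an adversarial gender head. The gradient-reversal layer, layer sizes, and single combined loss here are illustrative assumptions standing in for the paper's full training schedule, not a reproduction of it.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out.neg() * ctx.lam, None

class Debiaser(nn.Module):
    """Maps frozen face descriptors (e.g., 512-d vectors from a pretrained
    recognition network) to descriptors meant to keep identity cues but
    shed gender cues. Dimensions and identity count are placeholders."""
    def __init__(self, dim=512, n_identities=1000, lam=1.0):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.id_head = nn.Linear(dim, n_identities)  # preserves identity info
        self.gender_head = nn.Linear(dim, 2)         # adversary on gender
        self.lam = lam

    def forward(self, desc):
        z = self.transform(desc)
        # Reversed gradients from the gender loss push the transform
        # toward erasing gender information rather than encoding it.
        return z, self.id_head(z), self.gender_head(GradReverse.apply(z, self.lam))

model = Debiaser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def train_step(desc, id_labels, gender_labels):
    """One step on a batch of precomputed descriptors (labels are LongTensors)."""
    _, id_logits, g_logits = model(desc)
    loss = ce(id_logits, id_labels) + ce(g_logits, gender_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time only the learned transform would be kept, its output replacing the original descriptor for verification; if the de-biasing works, a separately trained gender classifier should perform near chance on the transformed descriptors.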
Related papers
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun
resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin tone, as well as an interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z)
- Explaining Bias in Deep Face Recognition via Image Characteristics [9.569575076277523]
We evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets.
We then analyze the impact of image characteristics on model performance.
arXiv Detail & Related papers (2022-08-23T17:18:23Z)
- The Gender Gap in Face Recognition Accuracy Is a Hairy Problem [8.768049933358968]
We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
arXiv Detail & Related papers (2022-06-10T04:32:47Z)
- ArcFace Knows the Gender, Too! [0.0]
Instead of defining a new model for gender classification, this paper uses ArcFace features to determine gender.
Discriminative methods such as Support Vector Machines (SVM), Linear Discriminant Analysis, and Logistic Regression demonstrate that the features extracted by ArcFace create a remarkable distinction between the gender classes.
arXiv Detail & Related papers (2021-12-19T10:00:36Z)
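To show how little machinery such a probe needs, here is a minimal scikit-learn sketch, assuming ArcFace embeddings and binary gender labels have already been extracted to the (hypothetical) files below; the SVM settings are likewise illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import LinearSVC

# Hypothetical inputs: an N x 512 array of precomputed ArcFace embeddings
# and an N-vector of 0/1 gender labels. File names are placeholders.
X = np.load("arcface_embeddings.npy")
y = np.load("gender_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# L2 normalization mirrors how ArcFace embeddings are usually compared.
clf = make_pipeline(Normalizer(norm="l2"), LinearSVC(C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out gender accuracy: {clf.score(X_te, y_te):.3f}")
```

High held-out accuracy from such a simple linear probe is precisely the privacy leakage described in the main abstract above, and the kind of signal AGENDA is designed to suppress.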
- Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z)
- PASS: Protected Attribute Suppression System for Mitigating Bias in Face
Recognition [55.858374644761525]
Face recognition networks encode information about sensitive attributes while being trained for identity classification.
Existing bias mitigation approaches require end-to-end training and are unable to achieve high verification accuracy.
We present a descriptor-based adversarial de-biasing approach called 'Protected Attribute Suppression System (PASS)'.
PASS can be trained on top of descriptors obtained from any previously trained high-performing network to classify identities and simultaneously reduce the encoding of sensitive attributes.
arXiv Detail & Related papers (2021-08-09T00:39:22Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data
on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.