FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition
- URL: http://arxiv.org/abs/2009.07838v2
- Date: Wed, 2 Dec 2020 17:17:48 GMT
- Title: FairFace Challenge at ECCV 2020: Analyzing Bias in Face Recognition
- Authors: Tomáš Sixta, Julio C. S. Jacques Junior, Pau Buch-Cardona, Neil M. Robertson, Eduard Vazquez, Sergio Escalera
- Abstract summary: The aim of the challenge was to evaluate accuracy and bias in gender and skin colour of submitted algorithms.
The dataset is not balanced, which simulates a real-world scenario in which AI-based models that are expected to produce fair outcomes are trained and evaluated on imbalanced data.
The analysis of top-10 teams shows higher false positive rates (and lower false negative rates) for females with dark skin tone.
- Score: 26.49981022316179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work summarizes the 2020 ChaLearn Looking at People Fair Face Recognition and Analysis Challenge and provides a description of the top-winning solutions and an analysis of the results. The aim of the challenge was to evaluate the accuracy and the gender and skin-colour bias of submitted algorithms on the task of 1:1 face verification in the presence of other confounding attributes. Participants were evaluated using an in-the-wild dataset based on a reannotated version of IJB-C, further enriched with 12.5K new images and additional labels. The dataset is not balanced, which simulates a real-world scenario in which AI-based models that are expected to produce fair outcomes are trained and evaluated on imbalanced data. The challenge attracted 151 participants, who made more than 1.8K submissions in total. The final phase attracted 36 active teams, 10 of which exceeded 0.999 AUC-ROC while achieving very low scores on the proposed bias metrics. Common strategies among participants were face pre-processing, homogenization of data distributions, bias-aware loss functions, and ensemble models. The analysis of the top-10 teams shows higher false positive rates (and lower false negative rates) for females with dark skin tone, as well as the potential of eyeglasses and young age to increase false positive rates.
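For concreteness, the per-subgroup evaluation described above can be sketched in a few lines. This is an illustrative sketch only, not the challenge's official scoring code; `scores`, `labels`, and `groups` are hypothetical inputs (pair similarity scores, binary same-identity labels, and a demographic tag per pair).

```python
# Sketch of per-subgroup 1:1 verification metrics (illustrative only,
# not the official challenge scoring code).
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_rates(scores, labels, groups, threshold=0.5):
    """Return AUC-ROC, FPR and FNR per subgroup at a global threshold."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        s, y = scores[m], labels[m]
        fp = np.sum((s >= threshold) & (y == 0))  # impostor pairs accepted
        fn = np.sum((s < threshold) & (y == 1))   # genuine pairs rejected
        out[g] = {
            "auc": roc_auc_score(y, s),  # assumes both pair types occur in g
            "fpr": fp / max(np.sum(y == 0), 1),
            "fnr": fn / max(np.sum(y == 1), 1),
        }
    return out
```

Bias metrics of this general kind compare error rates across subgroups; the finding above corresponds to the "fpr" entry being systematically higher (and "fnr" lower) for females with dark skin tone.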
Related papers
- Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets (a simple balancing sketch follows this entry).
arXiv Detail & Related papers (2024-06-24T12:33:21Z)
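As a loose illustration of attribute balancing (the paper balances generated training data, which is more involved than this), the sketch below oversamples a dataset so that every demographic cell, e.g. each (gender, skin tone) combination, appears equally often; `balance_by_attributes` and its inputs are hypothetical.

```python
# Sketch of demographic-attribute balancing by oversampling
# (illustrative; not the paper's generation-based mechanism).
import random
from collections import defaultdict

def balance_by_attributes(samples, cell_of):
    """samples: list of records; cell_of: maps a record to its
    demographic cell, e.g. lambda s: (s["gender"], s["skin_tone"])."""
    cells = defaultdict(list)
    for s in samples:
        cells[cell_of(s)].append(s)
    n = max(len(group) for group in cells.values())
    balanced = []
    for group in cells.values():
        # Oversample smaller cells with replacement up to the largest cell.
        balanced.extend(group + random.choices(group, k=n - len(group)))
    random.shuffle(balanced)
    return balanced
```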
- Addressing Racial Bias in Facial Emotion Recognition [1.4896509623302834]
This study focuses on analyzing racial bias by sub-sampling training sets with varied racial distributions.
Our findings indicate that smaller datasets with posed faces improve on both fairness and performance metrics as the simulations approach racial balance.
In larger datasets with greater facial variation, fairness metrics generally remain constant, suggesting that racial balance by itself is insufficient to achieve parity in test performance across different racial groups.
arXiv Detail & Related papers (2023-08-09T03:03:35Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically study bias from both the data and the algorithm perspectives.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in a large-margin loss (an illustrative per-group margin sketch follows this entry).
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
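As a rough illustration of the adaptive-margin idea (this is not the authors' MBN implementation), a large-margin softmax loss in the CosFace style can carry one margin per demographic group; in MBN the margins are meta-learned, whereas here `group_margins` is just a hypothetical learnable tensor.

```python
# Sketch of a CosFace-style loss with per-group margins (illustrative;
# not the Meta Balanced Network implementation).
import torch
import torch.nn.functional as F

def adaptive_margin_loss(cosine, labels, group_ids, group_margins, scale=64.0):
    """cosine: (N, C) cosine similarities to class centres; labels: (N,)
    identity ids; group_ids: (N,) demographic group of each sample."""
    margins = group_margins[group_ids]                 # (N,) margin per sample
    one_hot = F.one_hot(labels, num_classes=cosine.size(1)).float()
    # Subtract each sample's group margin from its target-class logit only.
    logits = scale * (cosine - one_hot * margins.unsqueeze(1))
    return F.cross_entropy(logits, labels)

# group_margins = torch.nn.Parameter(torch.full((num_groups,), 0.35))
# Larger margins for under-performing groups enforce tighter identity
# clusters, the balancing effect a meta-learned margin schedule targets.
```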
- Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task than at the identification task.
arXiv Detail & Related papers (2021-10-15T22:26:20Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions correlate with the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results (a per-subgroup threshold sketch follows this entry).
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
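To make the single-threshold observation concrete, here is a minimal sketch (not the paper's domain adaptation scheme) that calibrates one decision threshold per subgroup to a common target false match rate; the inputs are hypothetical.

```python
# Sketch: per-subgroup thresholds calibrated to one target false match
# rate (FMR). Illustrative only; assumes every subgroup has impostor pairs.
import numpy as np

def per_group_thresholds(scores, labels, groups, target_fmr=1e-3):
    """For each subgroup, pick the score above which only a target_fmr
    fraction of impostor pairs (labels == 0) would be accepted."""
    thresholds = {}
    for g in np.unique(groups):
        impostor = scores[(groups == g) & (labels == 0)]
        thresholds[g] = np.quantile(impostor, 1.0 - target_fmr)
    return thresholds
```

A single global threshold typically over-accepts impostors in some subgroups while over-rejecting genuine pairs in others; per-group thresholds equalize FMR by construction.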
- Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
- Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization [15.431761867166]
We propose a novel unsupervised fair score normalization approach to reduce the effect of bias in face recognition.
Our solution reduces demographic biases by up to 82.7% when gender is considered.
In contrast to previous works, our fair normalization approach also enhances the overall performance, by up to 53.2% at a false match rate of 0.001 and up to 82.9% at a false match rate of 0.00001 (a group-wise normalization sketch follows this entry).
arXiv Detail & Related papers (2020-02-10T08:17:26Z)
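As a loose illustration of what score normalization of this kind aims at (the paper's approach is unsupervised and differs in detail), the sketch below z-normalizes comparison scores within each demographic group using impostor scores from a hypothetical labelled calibration set, so that one global threshold maps to a similar false match rate in every group.

```python
# Sketch of group-wise score normalization (illustrative only; the
# paper's fair score normalization is unsupervised and more involved).
import numpy as np

def normalize_scores(scores, groups, calib_scores, calib_groups, calib_labels):
    """Standardize each group's scores by that group's impostor-score
    statistics, estimated on a calibration set."""
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(groups):
        impostor = calib_scores[(calib_groups == g) & (calib_labels == 0)]
        mu, sigma = impostor.mean(), impostor.std() + 1e-9
        m = groups == g
        out[m] = (scores[m] - mu) / sigma  # z-normalize per group
    return out
```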
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.