The Impact of Racial Distribution in Training Data on Face Recognition
Bias: A Closer Look
- URL: http://arxiv.org/abs/2211.14498v1
- Date: Sat, 26 Nov 2022 07:03:24 GMT
- Authors: Manideep Kolla, Aravinth Savadamuthu
- Abstract summary: We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition algorithms can be highly useful in real-world
applications, but they can also cause harm when biased against certain
demographics. It is therefore essential to understand how these algorithms are
trained and what factors affect their accuracy and fairness in order to build
better ones. In this study, we shed
some light on the effect of racial distribution in the training data on the
performance of face recognition models. We conduct 16 different experiments
with varying racial distributions of faces in the training data. We analyze
these trained models using accuracy metrics, clustering metrics, UMAP
projections, face quality, and decision thresholds. We show that a uniform
distribution of races in the training data alone does not guarantee bias-free
face recognition algorithms, and that factors such as face image quality play a
crucial role. We also study the correlation between the clustering
metrics and bias to understand whether clustering is a good indicator of bias.
Finally, we introduce a metric called racial gradation to study the inter- and
intra-race correlations in facial features and how they affect the learning
ability of face recognition models. With this study, we aim to bring more
understanding to an essential element of face recognition training: the data. A
better understanding of the impact of training data on the bias of face
recognition algorithms will aid in creating better datasets and, in turn,
better face recognition systems.
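One of the analyses named in the abstract, comparing verification decision thresholds across demographic groups, can be sketched as follows. This is an illustrative sketch on synthetic embeddings, not the authors' code: the group names, dimensions, and distributions are invented for the example, and the point is only that groups whose impostor-score distributions differ end up with different thresholds at the same false match rate.

```python
# Illustrative sketch (not the paper's code): comparing per-group
# decision thresholds for a face verification model at a fixed
# false match rate (FMR). All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def impostor_scores(embeddings, n_pairs=2000):
    """Similarities between embeddings of different (synthetic) identities."""
    n = len(embeddings)
    scores = []
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        scores.append(cosine(embeddings[i], embeddings[j]))
    return np.array(scores)

def threshold_at_fmr(scores, fmr=0.01):
    """Smallest threshold whose false-match rate is at most `fmr`."""
    return float(np.quantile(scores, 1.0 - fmr))

# Synthetic 128-d embeddings for two hypothetical demographic groups;
# group_b is given a tighter, shifted embedding cloud, so its impostor
# similarities are systematically higher.
group_a = rng.normal(0.0, 1.0, size=(500, 128))
group_b = rng.normal(0.0, 1.0, size=(500, 128)) * 0.5 + 0.8

for name, emb in [("group_a", group_a), ("group_b", group_b)]:
    thr = threshold_at_fmr(impostor_scores(emb))
    print(f"{name}: threshold @ FMR=1% -> {thr:.3f}")
```

A single global threshold calibrated on one group would then over- or under-reject the other, which is one way training-data composition surfaces as operational bias.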
Related papers
- FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features [3.9440964696313485]
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often obtain a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z)
- Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets.
arXiv Detail & Related papers (2024-06-24T12:33:21Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with attributes that are unrelated to the downstream task.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Addressing Racial Bias in Facial Emotion Recognition [1.4896509623302834]
This study focuses on analyzing racial bias by sub-sampling training sets with varied racial distributions.
Our findings indicate that smaller datasets with posed faces improve on both fairness and performance metrics as the simulations approach racial balance.
In larger datasets with greater facial variation, fairness metrics generally remain constant, suggesting that racial balance by itself is insufficient to achieve parity in test performance across different racial groups.
arXiv Detail & Related papers (2023-08-09T03:03:35Z)
- MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition [37.756287362799945]
We argue that the commonly used attribute-based fairness metric is not appropriate for face recognition.
We propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches.
Our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets.
arXiv Detail & Related papers (2022-11-28T09:47:21Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically and scientifically study bias from both data and algorithm aspects.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning [17.088716485755917]
We propose a discrimination-aware learning method to improve accuracy and fairness of biased face recognition algorithms.
We experimentally show that learning processes based on the most widely used face databases have led to popular pre-trained deep face models that exhibit strong algorithmic discrimination.
Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness.
arXiv Detail & Related papers (2020-04-22T10:32:16Z)
- Exploring Racial Bias within Face Recognition via per-subject Adversarially-Enabled Data Augmentation [15.924281804465252]
We propose a novel adversarially derived data augmentation methodology that aims to enable dataset balance at a per-subject level.
Our aim is to automatically construct a synthesised dataset by transforming facial images across varying racial domains.
In a side-by-side comparison, we show the positive impact our proposed technique can have on the recognition performance for (racial) minority groups.
arXiv Detail & Related papers (2020-04-19T19:46:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.