Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization
- URL: http://arxiv.org/abs/2002.03592v3
- Date: Thu, 5 Nov 2020 14:57:23 GMT
- Title: Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization
- Authors: Philipp Terhörst, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, Arjan Kuijper
- Abstract summary: We propose a novel unsupervised fair score normalization approach to reduce the effect of bias in face recognition.
Our solution reduces demographic bias by up to 82.7% when gender is considered.
In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 0.001 and by up to 82.9% at a false match rate of 0.00001.
- Score: 15.431761867166
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Current face recognition systems achieve strong results on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Consequently, an easily integrable solution is needed to reduce the discriminatory effect of these biased systems. Previous work mainly focused on learning less biased face representations, which comes at the cost of a strongly degraded overall recognition performance. In this work, we propose a novel unsupervised fair score normalization approach that is specifically designed to reduce the effect of bias in face recognition and subsequently leads to a significant overall performance boost. Our hypothesis is built on the notion of individual fairness: the normalization is designed to treat similar individuals similarly. Experiments were conducted on three publicly available datasets captured under controlled and in-the-wild conditions. Results demonstrate that our solution reduces demographic bias, e.g. by up to 82.7% when gender is considered. Moreover, it mitigates bias more consistently than existing works. In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 0.001 and by up to 82.9% at a false match rate of 0.00001. Additionally, it is easily integrable into existing recognition systems and is not limited to face biometrics.
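The following sketch illustrates the general mechanism behind such an unsupervised score normalization: cluster the embedding space without demographic labels, estimate a decision threshold per cluster, and shift each comparison score so that every cluster operates at the global threshold. It is a minimal illustration assuming k-means clusters and thresholds taken as impostor score quantiles at a target false match rate; function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_thresholds(embeddings, impostor_probe_idx,
                           impostor_scores, k=100, target_fmr=1e-3):
    """Cluster training embeddings (no demographic labels needed) and
    estimate one decision threshold per cluster at the target FMR."""
    km = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    global_thr = np.quantile(impostor_scores, 1.0 - target_fmr)
    thresholds = np.full(k, global_thr)
    for c in range(k):
        # impostor scores whose probe embedding falls into cluster c
        in_c = km.labels_[impostor_probe_idx] == c
        if in_c.any():
            thresholds[c] = np.quantile(impostor_scores[in_c],
                                        1.0 - target_fmr)
    return km, thresholds, global_thr

def normalize_score(score, emb_a, emb_b, km, thresholds, global_thr):
    """Shift a raw comparison score so that both samples' clusters
    operate at the global decision threshold."""
    ca = km.predict(emb_a.reshape(1, -1))[0]
    cb = km.predict(emb_b.reshape(1, -1))[0]
    return score + global_thr - 0.5 * (thresholds[ca] + thresholds[cb])
```

Because the shift depends only on the cluster assignments of the two compared samples, similar individuals receive similar score corrections, and no demographic annotation is required at any point.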
Related papers
- Improving Bias in Facial Attribute Classification: A Combined Impact of KL Divergence induced Loss Function and Dual Attention [3.5527561584422465]
Earlier systems often exhibited demographic bias, particularly in gender and racial classification, with lower accuracy for women and individuals with darker skin tones.
This paper presents a method using a dual attention mechanism with a pre-trained Inception-ResNet V1 model, enhanced by KL-divergence regularization and a cross-entropy loss function.
The experimental results show significant improvements in both fairness and classification accuracy, providing promising advances in addressing bias and enhancing the reliability of facial recognition systems.
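As a rough illustration of how a KL-divergence term can be combined with cross-entropy for fairness, the sketch below pulls each demographic group's mean prediction toward the batch-wide mean. The paper's actual regularization target is not specified in the summary, so this shows only the general pattern; `lam` and the grouping are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fair_kl_loss(logits, labels, group_ids, lam=0.1):
    """Cross-entropy plus a KL term that penalizes demographic groups
    whose average prediction drifts from the batch-wide average."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=1)
    overall = probs.mean(dim=0)                    # batch-wide mean prediction
    kl = logits.new_zeros(())
    for g in group_ids.unique():
        group_mean = probs[group_ids == g].mean(dim=0)
        # KL(group_mean || overall); kl_div expects log-probs as input
        kl = kl + F.kl_div(overall.log(), group_mean, reduction="sum")
    return ce + lam * kl
```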
arXiv Detail & Related papers (2024-10-15T01:29:09Z)
- Score Normalization for Demographic Fairness in Face Recognition [16.421833444307232]
Well-known sample-centered score normalization techniques, Z-norm and T-norm, do not improve fairness for high-security operating points.
We extend the standard Z/T-norm to integrate demographic information in normalization.
We show that our techniques generally improve the overall fairness of five state-of-the-art pre-trained face recognition networks.
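For context, classical Z-norm standardizes a raw score with impostor cohort statistics of the reference sample; a demographic extension restricts the cohort to the reference's group. The sketch below is a minimal illustration of that idea under assumed per-group cohort statistics, not the paper's exact formulation.

```python
import numpy as np

def fit_group_stats(cohort_scores, cohort_groups):
    """Mean and standard deviation of impostor cohort scores,
    computed separately for each demographic group."""
    stats = {}
    for g in np.unique(cohort_groups):
        s = cohort_scores[cohort_groups == g]
        stats[g] = (s.mean(), s.std())
    return stats

def znorm_demographic(score, ref_group, stats):
    """Z-normalize a score against the reference's group cohort."""
    mu, sigma = stats[ref_group]
    return (score - mu) / sigma
```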
arXiv Detail & Related papers (2024-07-19T07:51:51Z)
- Towards Fair Face Verification: An In-depth Analysis of Demographic Biases [11.191375513738361]
Deep learning-based person identification and verification systems have improved remarkably in accuracy in recent years.
However, such systems have been found to exhibit significant biases related to race, age, and gender.
This paper presents an in-depth analysis, with a particular emphasis on the intersectionality of these demographic factors.
arXiv Detail & Related papers (2023-07-19T14:49:14Z)
- MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition [37.756287362799945]
We argue that the commonly used attribute-based fairness metric is not appropriate for face recognition.
We propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches.
Our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets.
arXiv Detail & Related papers (2022-11-28T09:47:21Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Gradient Based Activations for Accurate Bias-Free Learning [22.264226961225003]
We show that a biased discriminator can actually be used to improve this bias-accuracy tradeoff.
Specifically, this is achieved by using a feature masking approach using the discriminator's gradients.
We show that this simple approach works well to reduce bias as well as improve accuracy significantly.
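The summary gives only the high-level idea; one conceivable way to mask features with a bias discriminator's gradients is sketched below. The per-dimension sensitivity score and the `keep` ratio are assumptions for illustration, not the paper's exact rule.

```python
import torch
import torch.nn.functional as F

def mask_biased_features(features, discriminator, group_labels, keep=0.8):
    """Suppress the feature dimensions to which the bias discriminator
    is most sensitive, judged by gradient magnitude."""
    feats = features.detach().requires_grad_(True)
    loss = F.cross_entropy(discriminator(feats), group_labels)
    grads, = torch.autograd.grad(loss, feats)
    sensitivity = grads.abs().mean(dim=0)          # per-dimension sensitivity
    k = int(keep * sensitivity.numel())
    mask = torch.zeros_like(sensitivity)
    mask[sensitivity.argsort()[:k]] = 1.0          # keep least-sensitive dims
    return features * mask
```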
arXiv Detail & Related papers (2022-02-17T00:30:40Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
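A minimal sketch of such instance reweighting follows, assuming each example is weighted inversely to the frequency of its (demographic group, label) combination so that no combination dominates the loss; the paper's actual weighting scheme may be more refined.

```python
import numpy as np

def balanced_instance_weights(groups, labels):
    """Per-example weights inversely proportional to the size of the
    example's (group, label) cell, normalized to a mean weight of 1."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    weights = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                weights[cell] = 1.0 / cell.sum()
    return weights * len(labels) / weights.sum()
```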
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering in the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representation.
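A minimal sketch of this pipeline, assuming k-means clusters as pseudo-attributes and a simple inverse-cluster-size weighting; the cluster count and weighting rule are illustrative choices, not the paper's exact scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_attribute_weights(embeddings, n_clusters=8):
    """Cluster the embeddings, treat cluster membership as a surrogate
    bias attribute, and upweight samples from small clusters."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    sizes = np.bincount(labels, minlength=n_clusters)
    weights = 1.0 / sizes[labels]                  # rare clusters count more
    return labels, weights * len(labels) / weights.sum()
```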
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining the competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)