MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition
- URL: http://arxiv.org/abs/2211.15181v1
- Date: Mon, 28 Nov 2022 09:47:21 GMT
- Title: MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition
- Authors: Fu-En Wang, Chien-Yi Wang, Min Sun, Shang-Hong Lai
- Abstract summary: We argue that the commonly used attribute-based fairness metric is not appropriate for face recognition.
We propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches.
Our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets.
- Score: 37.756287362799945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although significant progress has been made in face recognition,
demographic bias still exists in face recognition systems. For instance,
recognition performance for certain demographic groups is often lower than
for others. In this paper, we propose the MixFairFace framework to improve
fairness in face recognition models. First, we argue that the commonly used
attribute-based fairness metric is not appropriate for face recognition: a
face recognition system can only be considered fair when every person achieves
comparable performance. Hence, we propose a new evaluation protocol to fairly
evaluate the fairness performance of different approaches. Unlike previous
approaches, which require sensitive attribute labels such as race and gender
to reduce demographic bias, we address the identity bias in face
representations, i.e., the performance inconsistency between different
identities, without the need for sensitive attribute labels. To this end, we
propose the MixFair Adapter to determine and reduce the identity bias of
training samples. Our extensive experiments demonstrate that MixFairFace
achieves state-of-the-art fairness performance on all benchmark datasets.
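The identity-level fairness idea above — that a system is fair only when every person, not just every demographic group, gets comparable performance — could in spirit be summarized as statistics over per-identity error rates. The following is a minimal sketch; the function name and the choice of the false-non-match rate (FNR) as the per-identity statistic are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def identity_fairness_gap(per_identity_fnr: np.ndarray) -> dict:
    """Summarize how unevenly a verification system treats individual
    identities.

    per_identity_fnr: one false-non-match rate per enrolled identity, all
    measured at the same decision threshold.
    """
    return {
        "mean_fnr": float(per_identity_fnr.mean()),
        # Spread of per-identity error: 0 would mean every person is
        # treated identically.
        "std_fnr": float(per_identity_fnr.std()),
        # Worst-case gap between the best- and worst-served identities.
        "max_gap": float(per_identity_fnr.max() - per_identity_fnr.min()),
    }
```

An attribute-based metric could report zero bias if group averages match, even while individual identities within a group diverge; aggregating at the identity level exposes that inconsistency.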
Related papers
- Score Normalization for Demographic Fairness in Face Recognition [16.421833444307232]
Well-known sample-centered score normalization techniques, Z-norm and T-norm, do not improve fairness for high-security operating points.
We extend the standard Z/T-norm to integrate demographic information in normalization.
We show that our techniques generally improve the overall fairness of five state-of-the-art pre-trained face recognition networks.
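The Z-norm extension described in this entry could be sketched as follows: classic Z-norm standardizes a raw comparison score against an impostor cohort, and the demographic-aware variant restricts the cohort to the reference's demographic group. This is a minimal illustration under assumed simplifications, not the cited paper's exact formulation.

```python
import numpy as np

def znorm(score: float, cohort_scores: np.ndarray) -> float:
    """Classic Z-norm: standardize a raw comparison score against the
    scores the same reference template produces on an impostor cohort."""
    return (score - cohort_scores.mean()) / (cohort_scores.std() + 1e-12)

def demographic_znorm(score: float, cohort_scores: np.ndarray,
                      cohort_groups: np.ndarray, group: str) -> float:
    """Assumed demographic-aware variant: standardize against only the
    cohort impostors sharing the reference's demographic group."""
    mask = cohort_groups == group
    return znorm(score, cohort_scores[mask])
```

Restricting the cohort aligns each group's impostor score distribution before thresholding, which is why such normalization can equalize false-match behavior across groups.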
arXiv Detail & Related papers (2024-07-19T07:51:51Z)
- Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets.
arXiv Detail & Related papers (2024-06-24T12:33:21Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Human-Machine Comparison for Cross-Race Face Verification: Race Bias at the Upper Limits of Performance? [0.7036032466145111]
Face recognition algorithms perform more accurately than humans in some cases, though humans and machines both show race-based accuracy differences.
We constructed a challenging test of 'cross-race' face verification and used it to compare humans and two state-of-the-art face recognition systems.
We conclude that state-of-the-art systems for identity verification between two frontal face images of Black and White individuals can surpass the accuracy of the general human population.
arXiv Detail & Related papers (2023-05-25T19:41:13Z)
- The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look [0.0]
We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
arXiv Detail & Related papers (2022-11-26T07:03:24Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically and scientifically study bias from both data and algorithm aspects.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
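The adaptive-margin idea in this entry — a large-margin loss whose margin varies per sample rather than being a fixed hyperparameter — could be sketched as an ArcFace-style additive angular margin applied at the target class. The margins here are simply passed in; in MBN they would come from the meta-learned policy. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def margin_softmax_logits(cos_theta: np.ndarray, labels: np.ndarray,
                          margins: np.ndarray,
                          scale: float = 64.0) -> np.ndarray:
    """Large-margin softmax logits with a per-sample additive angular margin.

    cos_theta: (N, C) cosine similarities between embeddings and class centers.
    labels:    (N,) ground-truth class index per sample.
    margins:   (N,) angular margin applied to each sample's target class.
    """
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    logits = cos_theta.copy()
    rows = np.arange(len(labels))
    # Penalize the target class by its sample-specific margin, forcing a
    # larger angular separation for samples given a larger margin.
    logits[rows, labels] = np.cos(theta[rows, labels] + margins)
    return scale * logits
```

Giving under-served samples a larger margin tightens their class clusters, which is the mechanism by which adaptive margins can balance performance across skin tones.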
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning [17.088716485755917]
We propose a discrimination-aware learning method to improve accuracy and fairness of biased face recognition algorithms.
We experimentally show that learning processes based on the most widely used face databases have led to popular pre-trained deep face models that exhibit strong algorithmic discrimination.
Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness.
arXiv Detail & Related papers (2020-04-22T10:32:16Z)
- Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization [15.431761867166]
We propose a novel unsupervised fair score normalization approach to reduce the effect of bias in face recognition.
Our solution reduces demographic biases by up to 82.7% in the case when gender is considered.
In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 0.001 and up to 82.9% at a false match rate of 0.00001.
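The post-comparison normalization in this entry operates at fixed false-match-rate (FMR) operating points. One assumed simplification of the idea: estimate, per group, the score threshold hitting a target FMR on impostor scores, then shift each group's scores so a single global threshold yields that FMR for every group. The cited paper derives its groups unsupervised; here they are given, and all names are illustrative.

```python
import numpy as np

def group_fair_normalize(scores: np.ndarray, groups: np.ndarray,
                         impostor_scores: np.ndarray,
                         impostor_groups: np.ndarray,
                         target_fmr: float = 1e-3) -> np.ndarray:
    """Shift each group's scores so one global threshold produces the same
    false-match rate for every group (assumed sketch, not the paper's
    exact method)."""
    out = scores.astype(float).copy()
    # Global threshold at the target FMR over all impostor scores.
    t_global = np.quantile(impostor_scores, 1.0 - target_fmr)
    for g in np.unique(groups):
        # Group-specific threshold at the same target FMR.
        t_g = np.quantile(impostor_scores[impostor_groups == g],
                          1.0 - target_fmr)
        # Align this group's operating point with the global threshold.
        out[groups == g] += t_global - t_g
    return out
```

Because the shift is applied after comparison, the underlying face model is untouched; only the decision geometry per group changes.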
arXiv Detail & Related papers (2020-02-10T08:17:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.