SensitiveLoss: Improving Accuracy and Fairness of Face Representations
with Discrimination-Aware Deep Learning
- URL: http://arxiv.org/abs/2004.11246v2
- Date: Wed, 2 Dec 2020 16:22:01 GMT
- Title: SensitiveLoss: Improving Accuracy and Fairness of Face Representations
with Discrimination-Aware Deep Learning
- Authors: Ignacio Serna, Aythami Morales, Julian Fierrez, Manuel Cebrian, Nick
Obradovich, and Iyad Rahwan
- Abstract summary: We propose a discrimination-aware learning method to improve accuracy and fairness of biased face recognition algorithms.
We experimentally show that learning processes based on the most widely used face databases have led to popular pre-trained deep face models that exhibit strong algorithmic discrimination.
Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness.
- Score: 17.088716485755917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a discrimination-aware learning method to improve both accuracy
and fairness of biased face recognition algorithms. The most popular face
recognition benchmarks assume a distribution of subjects without paying much
attention to their demographic attributes. In this work, we perform a
comprehensive discrimination-aware experimentation of deep learning-based face
recognition. We also propose a general formulation of algorithmic
discrimination with application to face biometrics. The experiments include
three popular face recognition models and three public databases composed of
64,000 identities from different demographic groups characterized by gender and
ethnicity. We experimentally show that learning processes based on the most
widely used face databases have led to popular pre-trained deep face models
that exhibit strong algorithmic discrimination. We finally propose a
discrimination-aware learning method, Sensitive Loss, based on the popular
triplet loss function and a sensitive triplet generator. Our approach works as
an add-on to pre-trained networks and is used to improve their performance in
terms of average accuracy and fairness. The method shows results comparable to
state-of-the-art de-biasing networks and represents a step forward in
preventing discriminatory effects caused by automatic systems.
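The abstract describes Sensitive Loss only at a high level. As a rough illustration, below is a minimal sketch of a triplet-loss add-on with a demographic-aware triplet sampler; the function names, the sampling rule (negatives drawn from the anchor's own demographic group), and the head architecture are assumptions, not the authors' exact implementation.
```python
# Minimal sketch of a "Sensitive Loss" style add-on (assumed names and
# sampling rule; not the authors' exact implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def sensitive_triplets(ids, groups):
    """Build (anchor, positive, negative) index triplets using identity
    labels AND demographic labels. Assumption: the negative must share
    the anchor's demographic group, so identity separation cannot rely
    on demographic cues."""
    triplets = []
    n = len(ids)
    for a in range(n):
        pos = [i for i in range(n) if ids[i] == ids[a] and i != a]
        neg = [i for i in range(n) if ids[i] != ids[a] and groups[i] == groups[a]]
        if pos and neg:
            triplets.append((a, pos[0], neg[0]))
    return triplets

def sensitive_loss(emb, ids, groups, margin=0.2):
    trips = sensitive_triplets(ids, groups)
    if not trips:
        return emb.new_zeros(())
    a, p, n = (list(t) for t in zip(*trips))
    return F.triplet_margin_loss(emb[a], emb[p], emb[n], margin=margin)

# Add-on head trained on top of the (frozen) pre-trained face model's
# 512-d embeddings; only the head is updated with sensitive_loss.
head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
```
In this sketch the pre-trained network stays fixed and only the small head is trained, which matches the "add-on to pre-trained networks" framing of the abstract.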
Related papers
- Improving Bias in Facial Attribute Classification: A Combined Impact of KL Divergence induced Loss Function and Dual Attention [3.5527561584422465]
Earlier systems often exhibited demographic bias, particularly in gender and racial classification, with lower accuracy for women and individuals with darker skin tones.
This paper presents a method using a dual attention mechanism with a pre-trained Inception-ResNet V1 model, enhanced by KL-divergence regularization and a cross-entropy loss function.
The experimental results show significant improvements in both fairness and classification accuracy, providing promising advances in addressing bias and enhancing the reliability of facial recognition systems.
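A minimal sketch of a cross-entropy loss with KL-divergence regularization; the target distribution (uniform here) is an assumption, since the paper's exact regularizer is not specified in this summary.
```python
# Sketch: cross-entropy plus KL-divergence regularization toward a
# uniform prediction distribution (assumed target; the paper's exact
# regularizer may differ).
import torch
import torch.nn.functional as F

def ce_plus_kl(logits, labels, kl_weight=0.1):
    ce = F.cross_entropy(logits, labels)
    log_probs = F.log_softmax(logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / logits.size(1))
    kl = F.kl_div(log_probs, uniform, reduction="batchmean")
    return ce + kl_weight * kl
```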
arXiv Detail & Related papers (2024-10-15T01:29:09Z)
- LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels [0.11999555634662631]
This paper introduces "LabellessFace", a framework that mitigates demographic bias in face recognition without requiring demographic group labels.
We propose a novel fairness enhancement metric called the class favoritism level, which assesses the extent of favoritism towards specific classes.
This method dynamically adjusts learning parameters based on class favoritism levels, promoting fairness across all attributes.
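The summary does not define the class favoritism level, so the sketch below assumes it as the deviation of a class's mean intra-class similarity from the overall mean, used to scale a per-class margin; this is an illustrative guess, not the paper's formula.
```python
# Sketch: a possible "class favoritism level" and margin adjustment
# (assumed definition; LabellessFace's exact metric may differ).
import torch

def favoritism_levels(similarities, labels):
    # similarities: (N,) intra-class cosine similarities; labels: (N,) class ids
    overall = similarities.mean()
    return {int(c): (similarities[labels == c].mean() - overall).item()
            for c in labels.unique()}

def adjusted_margin(base_margin, level, scale=0.1):
    # Favored classes (positive level) get a larger margin to rein them in.
    return base_margin + scale * level
```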
arXiv Detail & Related papers (2024-09-14T02:56:07Z)
- The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look [0.0]
We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
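A minimal sketch of one of the listed analyses, a UMAP projection of face embeddings colored by demographic group, using the umap-learn package; the embeddings and labels here are random placeholders.
```python
# Sketch: UMAP projection of face embeddings by demographic group
# (placeholder data; the paper's exact analysis setup is assumed).
import numpy as np
import umap  # pip install umap-learn
import matplotlib.pyplot as plt

embeddings = np.random.rand(1000, 512)       # placeholder face embeddings
groups = np.random.randint(0, 4, size=1000)  # placeholder group labels

proj = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)
for g in np.unique(groups):
    m = groups == g
    plt.scatter(proj[m, 0], proj[m, 1], s=4, label=f"group {g}")
plt.legend()
plt.title("UMAP of face embeddings by demographic group")
plt.show()
```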
arXiv Detail & Related papers (2022-11-26T07:03:24Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method that uses adversarial learning to eliminate the negative effects of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
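A gradient reversal layer (GRL) is a common way to implement this kind of adversarial feature learning; whether this paper uses a GRL specifically is an assumption. A minimal sketch:
```python
# Sketch: adversarial feature learning via a gradient reversal layer,
# stripping nuisance factors (forgery method, identity) from features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Reverse (and scale) gradients flowing to the feature extractor.
        return -ctx.lam * grad, None

class ForgeryDetector(nn.Module):
    def __init__(self, feat_dim=256, n_methods=5, n_ids=100):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        self.real_fake = nn.Linear(feat_dim, 2)            # main task head
        self.method_head = nn.Linear(feat_dim, n_methods)  # adversary 1
        self.id_head = nn.Linear(feat_dim, n_ids)          # adversary 2

    def forward(self, x, lam=1.0):
        f = self.backbone(x)
        rev = GradReverse.apply(f, lam)
        return self.real_fake(f), self.method_head(rev), self.id_head(rev)
```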
arXiv Detail & Related papers (2022-07-08T09:23:59Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically study bias from both the data and the algorithm perspectives.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
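A simplified stand-in for MBN's adaptive margins: a CosFace-style large-margin loss with one learnable margin per demographic group. The meta-learning loop that adapts these margins in MBN is omitted here, so this is only a hedged sketch.
```python
# Sketch: large-margin loss with per-group learnable margins
# (simplified; MBN meta-learns these margins).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupAdaptiveMarginLoss(nn.Module):
    def __init__(self, feat_dim, n_classes, n_groups, s=64.0, m0=0.35):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.margins = nn.Parameter(torch.full((n_groups,), m0))
        self.s = s

    def forward(self, feats, labels, groups):
        # Cosine logits between normalized features and class weights.
        logits = F.linear(F.normalize(feats), F.normalize(self.W))
        onehot = F.one_hot(labels, num_classes=logits.size(1)).float()
        # Subtract each sample's group-specific margin at its target class.
        logits = logits - onehot * self.margins[groups].unsqueeze(1)
        return F.cross_entropy(self.s * logits, labels)
```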
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- Evaluating Proposed Fairness Models for Face Recognition Algorithms [0.0]
This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe.
We propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure.
We believe this is currently the largest open-source dataset of its kind.
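As a concrete illustration of what such a fairness measure can look like (not necessarily either of the two measures characterized in the paper), the sketch below computes the worst-case gap in false non-match rate across demographic groups at a fixed threshold:
```python
# Sketch: worst-case FNMR gap across demographic groups
# (illustrative measure, not the paper's specific proposals).
import numpy as np

def fnmr_gap(scores, same_identity, groups, threshold):
    # scores: comparison scores; same_identity: bool mask of genuine pairs
    fnmrs = []
    for g in np.unique(groups):
        mask = (groups == g) & same_identity
        if mask.any():
            fnmrs.append(np.mean(scores[mask] < threshold))  # genuine pairs rejected
    return max(fnmrs) - min(fnmrs)
```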
arXiv Detail & Related papers (2022-03-09T21:16:43Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation in which faces from every demographic group are more equally represented.
Our work mitigates face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
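A minimal sketch of how a Rank-1 identification accuracy of this kind can be computed against a gallery that includes distractors; embeddings are assumed L2-normalized, and the setup details are assumptions.
```python
# Sketch: Rank-1 identification accuracy with distractors in the gallery.
import numpy as np

def rank1_accuracy(probes, probe_ids, gallery, gallery_ids):
    # probes: (P, D) L2-normalized embeddings; gallery: (G, D) incl. distractors
    sims = probes @ gallery.T      # cosine similarity for normalized vectors
    best = sims.argmax(axis=1)     # closest gallery entry per probe
    return np.mean(gallery_ids[best] == probe_ids)
```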
arXiv Detail & Related papers (2020-01-09T15:50:28Z)