SensitiveLoss: Improving Accuracy and Fairness of Face Representations
with Discrimination-Aware Deep Learning
- URL: http://arxiv.org/abs/2004.11246v2
- Date: Wed, 2 Dec 2020 16:22:01 GMT
- Title: SensitiveLoss: Improving Accuracy and Fairness of Face Representations
with Discrimination-Aware Deep Learning
- Authors: Ignacio Serna, Aythami Morales, Julian Fierrez, Manuel Cebrian, Nick
Obradovich, and Iyad Rahwan
- Abstract summary: We propose a discrimination-aware learning method to improve accuracy and fairness of biased face recognition algorithms.
We experimentally show that learning processes based on the most used face databases have led to popular pre-trained deep face models that present a strong algorithmic discrimination.
Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness.
- Score: 17.088716485755917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a discrimination-aware learning method to improve both accuracy
and fairness of biased face recognition algorithms. The most popular face
recognition benchmarks assume a distribution of subjects without paying much
attention to their demographic attributes. In this work, we perform a
comprehensive discrimination-aware experimentation of deep learning-based face
recognition. We also propose a general formulation of algorithmic
discrimination with application to face biometrics. The experiments include
three popular face recognition models and three public databases composed of
64,000 identities from different demographic groups characterized by gender and
ethnicity. We experimentally show that learning processes based on the most
used face databases have led to popular pre-trained deep face models that
present a strong algorithmic discrimination. We finally propose a
discrimination-aware learning method, Sensitive Loss, based on the popular
triplet loss function and a sensitive triplet generator. Our approach works as
an add-on to pre-trained networks and is used to improve their performance in
terms of average accuracy and fairness. The method shows results comparable to
state-of-the-art de-biasing networks and represents a step forward to prevent
discriminatory effects by automatic systems.
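The proposed Sensitive Loss combines the standard triplet loss with a sensitive triplet generator. A minimal sketch of the idea follows; the selection heuristic, names, and data layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors: pull the anchor toward
    the positive (same identity) and push it away from the negative."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

def sensitive_triplets(embeddings, identities, groups):
    """Illustrative 'sensitive' triplet generator (assumed heuristic, not
    the paper's exact rule): the negative is drawn from the SAME demographic
    group as the anchor, so the network must separate identities within each
    group instead of exploiting demographic cues as a shortcut."""
    triplets = []
    n = len(embeddings)
    for a in range(n):
        for p in range(n):
            if p == a or identities[p] != identities[a]:
                continue  # positive must be a different sample of the same identity
            for neg in range(n):
                if identities[neg] != identities[a] and groups[neg] == groups[a]:
                    triplets.append((a, p, neg))  # same-group, different-identity negative
    return triplets
```

Because the loss operates only on embeddings, a generator like this can be bolted onto a frozen pre-trained network, which matches the paper's description of the method as an add-on.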
Related papers
- Analysis of Recent Trends in Face Recognition Systems [0.0]
Due to inter-class similarities and intra-class variations, face recognition systems generate false match and false non-match errors, respectively.
Recent research focuses on improving the robustness of extracted features and the pre-processing algorithms to enhance recognition accuracy.
arXiv Detail & Related papers (2023-04-23T18:55:45Z)
- The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look [0.0]
We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
arXiv Detail & Related papers (2022-11-26T07:03:24Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically and scientifically study bias from both data and algorithm aspects.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
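The adaptive-margin idea behind MBN can be pictured as a group-dependent margin inserted into a large-margin softmax. A hedged sketch follows; the function name, margin values, and grouping are invented for illustration, and MBN actually meta-learns the margins rather than fixing them:

```python
import numpy as np

def margin_softmax_logits(cos_theta, labels, group_margin):
    """Additive-margin softmax logits where the margin subtracted from the
    target-class cosine depends on the sample's demographic group.
    cos_theta: (batch, classes) cosine similarities; labels: target class
    per sample; group_margin: margin per sample, looked up from its group."""
    logits = cos_theta.copy()
    for i, (y, m) in enumerate(zip(labels, group_margin)):
        logits[i, y] -= m  # harder (larger) margin tightens that group's class boundary
    return logits
```

Making the margin larger for under-performing groups forces tighter intra-class clusters for them, which is one intuitive route to the more balanced per-group accuracy the abstract reports.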
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- Evaluating Proposed Fairness Models for Face Recognition Algorithms [0.0]
This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe.
We propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure.
We believe this is currently the largest open-source dataset of its kind.
arXiv Detail & Related papers (2022-03-09T21:16:43Z)
- Fairness Properties of Face Recognition and Obfuscation Systems [19.195705814819306]
Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause the facial recognition system to misidentify the user.
This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, surfaces the question of demographic fairness.
We find that metric embedding networks are demographically aware; they cluster faces in the embedding space based on their demographic attributes.
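The claim that embedding networks cluster faces by demographic attributes can be probed with a simple nearest-neighbor audit. A sketch on synthetic data (the metric and threshold are assumptions, not the paper's protocol):

```python
import numpy as np

def demographic_neighbor_rate(embeddings, groups):
    """Fraction of points whose nearest neighbor (excluding self) shares
    their demographic group. Near chance level suggests a demographically
    unaware embedding space; well above it suggests demographic clustering."""
    X = np.asarray(embeddings, dtype=float)
    # Pairwise Euclidean distances via broadcasting
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # never pick a point as its own neighbor
    nn = d.argmin(axis=1)
    return float(np.mean([groups[i] == groups[j] for i, j in enumerate(nn)]))
```

Run on real face embeddings, a rate far above the base rate of each group would reproduce, in miniature, the demographic awareness the paper reports.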
arXiv Detail & Related papers (2021-08-05T16:18:15Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining the competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.