On GANs perpetuating biases for face verification
- URL: http://arxiv.org/abs/2208.13061v1
- Date: Sat, 27 Aug 2022 17:47:09 GMT
- Title: On GANs perpetuating biases for face verification
- Authors: Sasikanth Kotti, Mayank Vatsa, Richa Singh
- Abstract summary: We show that data generated by generative models such as GANs is prone to bias and fairness issues.
Specifically, GANs trained on the FFHQ dataset show a bias towards generating white faces in the 20-29 age group.
- Score: 75.99046162669997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning systems need large amounts of data for training. Datasets for
training face verification systems are difficult to obtain and prone to privacy
issues. Synthetic data generated by generative models such as GANs can be a good
alternative. However, we show that data generated by GANs is prone to bias
and fairness issues. Specifically, GANs trained on the FFHQ dataset show a bias
towards generating white faces in the 20-29 age group. We also demonstrate
that synthetic faces cause disparate impact, specifically for the race attribute,
when used to fine-tune face verification systems. This is measured using the
$DoB_{fv}$ metric, defined as the standard deviation of GAR@FAR for face
verification.
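The $DoB_{fv}$ metric above can be sketched in a few lines: given the Genuine Accept Rate measured at a fixed False Accept Rate for each demographic subgroup, the metric is the standard deviation of those per-group values (lower means more equitable performance). This is a minimal illustration, assuming population standard deviation and hypothetical per-group GAR numbers; the paper's exact evaluation protocol may differ.

```python
import statistics

def dob_fv(gar_at_far_by_group):
    """Degree of Bias for face verification (DoB_fv): the standard
    deviation of per-subgroup GAR values, each measured at the same
    fixed FAR operating point. 0 means identical performance across
    groups; larger values mean greater disparity."""
    return statistics.pstdev(gar_at_far_by_group.values())

# Hypothetical per-race GAR values at a fixed FAR (e.g. FAR = 0.01).
gar = {"group_a": 0.92, "group_b": 0.88, "group_c": 0.85}
print(round(dob_fv(gar), 4))  # small but nonzero -> measurable disparity
```

A perfectly fair system under this metric would yield identical GAR for every subgroup, giving $DoB_{fv} = 0$.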
Related papers
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained with traditional methodologies can be unfair, exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z) - A Dataless FaceSwap Detection Approach Using Synthetic Images [5.73382615946951]
We propose a deepfake detection methodology that eliminates the need for any real data by making use of synthetically generated data using StyleGAN3.
This not only performs on par with the traditional methodology of training on real data, but also shows better generalization capabilities when fine-tuned with a small amount of real data.
arXiv Detail & Related papers (2022-12-05T19:49:45Z) - Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z) - Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z) - DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks [71.6879432974126]
We introduce DECAF: a GAN-based fair synthetic data generator for tabular data.
We show that DECAF successfully removes undesired bias and is capable of generating high-quality synthetic data.
We provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
arXiv Detail & Related papers (2021-10-25T12:39:56Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Imperfect ImaGANation: Implications of GANs Exacerbating Biases on Facial Data Augmentation and Snapchat Selfie Lenses [20.36399588424965]
We show that popular Generative Adversarial Networks (GANs) exacerbate biases along the axes of gender and skin tone when given a skewed distribution of face-shots.
GANs also exacerbate biases by lightening skin color of non-white faces and transforming female facial features to be masculine when generating faces of engineering professors.
arXiv Detail & Related papers (2020-01-26T21:57:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.