Benchmarking Algorithmic Bias in Face Recognition: An Experimental
Approach Using Synthetic Faces and Human Evaluation
- URL: http://arxiv.org/abs/2308.05441v1
- Date: Thu, 10 Aug 2023 08:57:31 GMT
- Title: Benchmarking Algorithmic Bias in Face Recognition: An Experimental
Approach Using Synthetic Faces and Human Evaluation
- Authors: Hao Liang, Pietro Perona and Guha Balakrishnan
- Abstract summary: We propose an experimental method for measuring bias in face recognition systems.
Our method is based on generating synthetic faces using a neural face generator.
We validate our method quantitatively by evaluating race and gender biases of three research-grade face recognition models.
- Score: 24.35436087740559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an experimental method for measuring bias in face recognition
systems. Existing methods to measure bias depend on benchmark datasets that are
collected in the wild and annotated for protected (e.g., race, gender) and
non-protected (e.g., pose, lighting) attributes. Such observational datasets
only permit correlational conclusions, e.g., "Algorithm A's accuracy is
different on female and male faces in dataset X." By contrast, experimental
methods manipulate attributes individually and thus permit causal conclusions,
e.g., "Algorithm A's accuracy is affected by gender and skin color."
Our method is based on generating synthetic faces using a neural face
generator, where each attribute of interest is modified independently while
leaving all other attributes constant. Human observers crucially provide the
ground truth on perceptual identity similarity between synthetic image pairs.
We validate our method quantitatively by evaluating race and gender biases of
three research-grade face recognition models. Our synthetic pipeline reveals
that for these algorithms, accuracy is lower for Black and East Asian
population subgroups. Our method can also quantify how perceptual changes in
attributes affect face identity distances reported by these models. Our large
synthetic dataset, consisting of 48,000 synthetic face image pairs (10,200
unique synthetic faces) and 555,000 human annotations (individual attributes
and pairwise identity comparisons) is available to researchers in this
important area.
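The protocol can be summarized with a rough sketch (not the authors' code; the distances, human labels, and subgroup tags below are random stand-ins for the released annotations): verification accuracy is computed per subgroup by checking how often the model's distance-thresholded same/different decision agrees with the human-provided ground truth.
```python
import numpy as np

# Hypothetical illustration of the experimental protocol: pairs of synthetic
# faces differ in exactly one attribute, humans label whether each pair shows
# the "same" identity, and a face recognition model reports an embedding
# distance for the same pair. Per-subgroup accuracy is then compared.

def verification_accuracy(model_dists, human_same, threshold):
    """Fraction of pairs where the model's same/different decision
    (distance < threshold) agrees with the human ground truth."""
    model_same = model_dists < threshold
    return np.mean(model_same == human_same)

# Toy data: identity distances, human labels, and a subgroup tag per pair.
rng = np.random.default_rng(0)
dists = rng.uniform(0.0, 2.0, size=1000)
human_same = rng.integers(0, 2, size=1000).astype(bool)
subgroup = rng.choice(["White", "Black", "East Asian"], size=1000)

for g in np.unique(subgroup):
    mask = subgroup == g
    acc = verification_accuracy(dists[mask], human_same[mask], threshold=1.0)
    print(f"{g}: accuracy = {acc:.3f}")
```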
Related papers
- Bias and Diversity in Synthetic-based Face Recognition [12.408456748469426]
We investigate how the diversity of synthetic face recognition datasets compares to authentic datasets.
We look at the distribution of gender, ethnicity, age, and head position.
Regarding bias, the synthetic-based models exhibit bias behavior similar to that of the authentic-based models.
arXiv Detail & Related papers (2023-11-07T13:12:34Z)
- The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look [0.0]
We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
arXiv Detail & Related papers (2022-11-26T07:03:24Z)
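As a loose illustration of the kind of analysis described in the entry above (assuming the umap-learn and scikit-learn packages; the embeddings and race labels below are random placeholders), embeddings from a trained model can be projected with UMAP and scored with a clustering metric per racial group:
```python
import numpy as np
from sklearn.metrics import silhouette_score
import umap  # umap-learn package

# Toy stand-in for face embeddings extracted from a trained recognition model.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))
race_labels = rng.integers(0, 4, size=500)  # e.g., four racial groups

# 2-D UMAP projection for visual inspection of subgroup structure.
proj = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
print("2-D projection shape:", proj.shape)

# Clustering metric: how separable are the subgroups in embedding space?
print("silhouette by race:", silhouette_score(embeddings, race_labels))
```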
- Explaining Bias in Deep Face Recognition via Image Characteristics [9.569575076277523]
We evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets.
We then analyze the impact of image characteristics on model performance.
arXiv Detail & Related papers (2022-08-23T17:18:23Z)
- Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically and scientifically study bias from both data and algorithm aspects.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
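A minimal sketch of the group-adaptive margin idea referenced in the entry above, written as a CosFace-style large-margin softmax with a per-group margin (the fixed margin values are placeholders; in MBN the margins are learned via meta-learning):
```python
import numpy as np

def adaptive_margin_cosine_loss(cos_theta, labels, groups, margins, scale=64.0):
    """Large-margin softmax (CosFace-style) where the margin applied to the
    target class depends on the sample's demographic group.

    cos_theta: (N, C) cosine similarities between features and class weights
    labels:    (N,) ground-truth class indices
    groups:    (N,) group index per sample (e.g., skin-tone bin)
    margins:   (G,) margin per group (learned in MBN; fixed here)
    """
    logits = scale * cos_theta.copy()
    n = np.arange(len(labels))
    # Subtract a group-dependent margin from the target-class logit.
    logits[n, labels] = scale * (cos_theta[n, labels] - margins[groups])
    # Softmax cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[n, labels].mean()

# Toy usage with two demographic groups and ten identity classes.
rng = np.random.default_rng(0)
cos = rng.uniform(-1, 1, size=(8, 10))
y = rng.integers(0, 10, size=8)
g = rng.integers(0, 2, size=8)
print(adaptive_margin_cosine_loss(cos, y, g, margins=np.array([0.35, 0.45])))
```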
- Mitigating Bias in Facial Analysis Systems by Incorporating Label Diversity [4.089080285684415]
We introduce a novel learning method that combines subjective human-based labels and objective annotations based on mathematical definitions of facial traits.
Our method successfully mitigates unintended biases, while maintaining significant accuracy on the downstream task.
arXiv Detail & Related papers (2022-04-13T13:17:27Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
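The identity mixup idea mentioned in the SynFace entry above can be sketched roughly as blending two identity codes and their labels (in SynFace the mixing is applied to the face generator's identity coefficients; generic vectors are used here):
```python
import numpy as np

rng = np.random.default_rng(0)

def identity_mixup(z_a, z_b, y_a, y_b, num_ids, lam):
    """Blend two identity codes and their one-hot identity labels with
    weights (lam, 1 - lam). Illustration only: SynFace applies this mixing
    to the generator's identity coefficients before image synthesis."""
    z_mix = lam * z_a + (1.0 - lam) * z_b
    y_mix = np.zeros(num_ids)
    y_mix[y_a] += lam
    y_mix[y_b] += 1.0 - lam
    return z_mix, y_mix

# Toy usage: mix identities 3 and 7 out of 10 with a Beta-sampled coefficient.
z_mix, y_mix = identity_mixup(rng.normal(size=128), rng.normal(size=128),
                              y_a=3, y_b=7, num_ids=10, lam=rng.beta(0.2, 0.2))
print(z_mix.shape, y_mix)
```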
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
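A "transect" can be pictured as a walk along a single attribute direction in a generator's latent space while everything else is held fixed; the sketch below uses a placeholder generator and attribute direction, not the actual neural face generator:
```python
import numpy as np

def build_transect(generator, z_base, attr_direction, steps):
    """Generate a sequence of matched images that differ only along one
    attribute direction in latent space (a 'transect'); all other latent
    dimensions are held fixed."""
    alphas = np.linspace(-1.0, 1.0, steps)
    return [generator(z_base + a * attr_direction) for a in alphas]

# Placeholder generator: in practice this would be a neural face generator
# mapping a latent code to an image.
fake_generator = lambda z: z[:4]
rng = np.random.default_rng(0)
transect = build_transect(fake_generator, rng.normal(size=512),
                          rng.normal(size=512), steps=5)
print(len(transect))
```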
This list is automatically generated from the titles and abstracts of the papers in this site.