Balancing Beyond Discrete Categories: Continuous Demographic Labels for Fair Face Recognition
- URL: http://arxiv.org/abs/2506.01532v4
- Date: Fri, 06 Jun 2025 14:56:32 GMT
- Title: Balancing Beyond Discrete Categories: Continuous Demographic Labels for Fair Face Recognition
- Authors: Pedro C. Neto, Naser Damer, Jaime S. Cardoso, Ana F. Sequeira
- Abstract summary: We propose to treat ethnicity labels as a continuous variable instead of a discrete value per identity. We show that having the same number of identities per ethnicity does not represent a balanced dataset. We trained more than 65 different models and created more than 20 subsets of the original datasets.
- Score: 7.989700021807903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bias has been a constant in face recognition models. Over the years, researchers have looked at it from both the model and the data point of view. However, their approaches to mitigating data bias were limited and lacked insight into the real nature of the problem. In this document, we propose to treat ethnicity labels as a continuous variable instead of a discrete value per identity. We validate our formulation both experimentally and theoretically, showing that not all identities from one ethnicity contribute equally to the balance of the dataset; thus, having the same number of identities per ethnicity does not represent a balanced dataset. We further show that models trained on datasets balanced in the continuous space consistently outperform models trained on data balanced in the discrete space. We trained more than 65 different models and created more than 20 subsets of the original datasets.
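The core idea lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering, not the paper's actual subset-selection procedure: each identity carries a continuous ethnicity score vector, and a subset is built either by equalizing identity counts per discrete label or by greedily keeping the mean of the continuous scores close to a uniform target. The group count, score source, and greedy criterion are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each identity carries a continuous ethnicity score
# vector (e.g., softmax outputs of an ethnicity classifier) rather than a
# single hard label. Here: 3 ethnicity groups, 3000 candidate identities.
N_GROUPS, N_IDS, SUBSET = 3, 3000, 300
scores = rng.dirichlet(np.ones(N_GROUPS) * 0.5, size=N_IDS)  # (N_IDS, 3)
labels = scores.argmax(axis=1)                               # discrete view

def discrete_balance(labels, k):
    """Pick k identities with equal counts per discrete label."""
    picks = []
    for g in range(N_GROUPS):
        idx = np.flatnonzero(labels == g)
        picks.extend(rng.choice(idx, size=k // N_GROUPS, replace=False))
    return np.array(picks)

def continuous_balance(scores, k):
    """Greedily pick identities whose scores keep the running mean of the
    continuous ethnicity scores close to the uniform target."""
    target = np.full(N_GROUPS, 1.0 / N_GROUPS)
    remaining = set(range(len(scores)))
    picks, total = [], np.zeros(N_GROUPS)
    for _ in range(k):
        cand = list(remaining)
        # choose the identity that brings the new mean closest to the target
        means = (total + scores[cand]) / (len(picks) + 1)
        best = cand[int(np.argmin(np.abs(means - target).sum(axis=1)))]
        picks.append(best); total += scores[best]; remaining.discard(best)
    return np.array(picks)

for name, sel in [("discrete", discrete_balance(labels, SUBSET)),
                  ("continuous", continuous_balance(scores, SUBSET))]:
    print(f"{name:10s} mean ethnicity scores: {np.round(scores[sel].mean(axis=0), 3)}")
```

Under the discrete scheme the per-label counts match, but the aggregate of the continuous scores can still drift from uniform; that residual drift is the imbalance the continuous formulation targets.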
Related papers
- Biased Heritage: How Datasets Shape Models in Facial Expression Recognition [13.77824359359967]
We study bias propagation from datasets to trained models in image-based Facial Expression Recognition systems. We introduce new bias metrics specifically designed for multiclass problems with multiple demographic groups. Our findings suggest that preventing emotion-specific demographic patterns should be prioritized over general demographic balance in FER datasets.
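The summary does not spell out the proposed metrics; as a generic, assumed illustration of the kind of quantity involved, the sketch below measures, for each expression class, the spread of recall across demographic groups and averages it over classes. This is a simple per-class demographic gap, not the paper's exact metric.

```python
import numpy as np

def per_class_demographic_gap(y_true, y_pred, group):
    """Generic multiclass bias measure (illustrative): for each class,
    compute recall separately per demographic group and report the
    max-min spread, averaged over classes."""
    classes, groups = np.unique(y_true), np.unique(group)
    gaps = []
    for c in classes:
        recalls = []
        for g in groups:
            mask = (y_true == c) & (group == g)
            if mask.any():
                recalls.append((y_pred[mask] == c).mean())
        gaps.append(max(recalls) - min(recalls))
    return float(np.mean(gaps))

# Toy usage: 0 = neutral, 1 = happy, 2 = angry; two demographic groups.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 0])
group  = np.array([0, 1, 0, 1, 0, 1, 0, 1])
print(per_class_demographic_gap(y_true, y_pred, group))
```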
arXiv Detail & Related papers (2025-03-05T12:25:22Z) - The Impact of Balancing Real and Synthetic Data on Accuracy and Fairness in Face Recognition [10.849598219674132]
We investigate the impact of demographically balanced authentic and synthetic data, both individually and in combination, on the accuracy and fairness of face recognition models.
Our findings emphasize two main points: (i) the increased effectiveness of training data generated by diffusion-based models in enhancing accuracy, whether used alone or combined with subsets of authentic data, and (ii) the minimal impact of incorporating balanced data from pre-trained generative methods on fairness.
arXiv Detail & Related papers (2024-09-04T16:50:48Z) - Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods. We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results. We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
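The exact BPFA criterion is not given in this summary. The sketch below is one hedged reading of activation-based bias pruning: rank feature channels by the spread of their per-group mean activations and mask the most group-dependent ones post hoc, leaving trained weights untouched. The ranking rule and pruning fraction are assumptions, not the paper's specification.

```python
import numpy as np

def bias_pruning_mask(features, group, prune_frac=0.1):
    """One possible reading of activation-based bias pruning (assumption,
    not the exact BPFA): rank feature channels by the spread of their
    per-group mean activations and mask out the most skewed ones."""
    groups = np.unique(group)
    group_means = np.stack([features[group == g].mean(axis=0) for g in groups])
    disparity = group_means.max(axis=0) - group_means.min(axis=0)  # per channel
    k = int(prune_frac * features.shape[1])
    pruned = np.argsort(disparity)[-k:]          # most group-dependent channels
    mask = np.ones(features.shape[1], dtype=bool)
    mask[pruned] = False
    return mask

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 64))               # penultimate-layer activations
grp = rng.integers(0, 2, size=200)               # two demographic groups
feats[grp == 1, :4] += 1.5                       # inject a group-skewed signal
mask = bias_pruning_mask(feats, grp)
print("pruned channels:", np.flatnonzero(~mask))  # expect channels 0..3 included
```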
arXiv Detail & Related papers (2024-07-19T14:53:18Z) - Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets.
arXiv Detail & Related papers (2024-06-24T12:33:21Z) - Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
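As a hedged sketch of how such a regularizer could look during fine-tuning (the toy encoder, prototype tokenization, and weighting below are illustrative stand-ins, not DAFair's exact objective), one can penalize the distance between the model's mean representations of two sets of prototypical demographic texts:

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a language model encoder (illustrative only)."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)  # mean-pools token embeddings

    def forward(self, token_ids):
        return self.emb(token_ids)

encoder = TinyEncoder()

# Hypothetical prototypical demographic texts, pre-tokenized to id tensors.
proto_group_a = torch.randint(0, 100, (8, 12))   # texts invoking group A
proto_group_b = torch.randint(0, 100, (8, 12))   # texts invoking group B

def fairness_regularizer(encoder):
    """Penalize the gap between mean representations of the two
    prototypical demographic text sets (one reading of the approach)."""
    za = encoder(proto_group_a).mean(dim=0)
    zb = encoder(proto_group_b).mean(dim=0)
    return (za - zb).pow(2).sum()

task_loss = torch.tensor(0.0)                    # placeholder for the task loss
lam = 0.1                                        # regularization weight (assumed)
loss = task_loss + lam * fairness_regularizer(encoder)
loss.backward()                                  # gradients flow into the encoder
print(float(loss))
```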
arXiv Detail & Related papers (2024-03-14T15:58:36Z) - Bias and Diversity in Synthetic-based Face Recognition [12.408456748469426]
We investigate how the diversity of synthetic face recognition datasets compares to authentic datasets.
We look at the distribution of gender, ethnicity, age, and head position.
With regard to bias, the synthetic-based models share similar bias behavior with the authentic-based models.
arXiv Detail & Related papers (2023-11-07T13:12:34Z) - Toward responsible face datasets: modeling the distribution of a disentangled latent space for sampling face images from demographic groups [0.0]
Recently, it has been exposed that some modern facial recognition systems could discriminate against specific demographic groups.
We propose to use a simple method for modeling and sampling a disentangled projection of a StyleGAN latent space to generate any combination of demographic groups (see the sketch after this summary).
Our experiments show that we can effectively synthesize any combination of demographic groups, and that the generated identities differ from those in the original training dataset.
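One assumed shape such a pipeline could take, with synthetic stand-in latents in place of a projected StyleGAN space: fit one Gaussian per demographic group in the latent space and draw new latents per group. The data, dimensions, and single-Gaussian-per-group choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for projected latents with known group annotations.
# (Synthetic data; a real pipeline would project encoded face latents.)
latents = np.concatenate([rng.normal(-1, 0.5, size=(500, 16)),
                          rng.normal(+1, 0.5, size=(500, 16))])
groups = np.repeat([0, 1], 500)

def fit_group_gaussians(latents, groups):
    """Model each demographic group as a Gaussian in latent space."""
    params = {}
    for g in np.unique(groups):
        x = latents[groups == g]
        params[g] = (x.mean(axis=0), np.cov(x, rowvar=False))
    return params

def sample_group(params, g, n):
    """Draw n new latents for group g; feeding these to the generator
    would synthesize faces from that demographic group."""
    mu, cov = params[g]
    return rng.multivariate_normal(mu, cov, size=n)

params = fit_group_gaussians(latents, groups)
print(sample_group(params, g=0, n=10).shape)  # (10, 16)
```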
arXiv Detail & Related papers (2023-09-15T14:42:04Z) - Zero-shot racially balanced dataset generation using an existing biased StyleGAN2 [5.463417677777276]
We propose a methodology that leverages the biased generative model StyleGAN2 to create demographically diverse images of synthetic individuals.
By training face recognition models with the resulting balanced dataset containing 50,000 identities per race, we can improve their performance and minimize biases that might have been present in a model trained on a real dataset.
arXiv Detail & Related papers (2023-05-12T18:07:10Z) - Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are linked to the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results, as sketched after this summary.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
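To make the single-threshold pitfall concrete, the sketch below (illustrative, not the paper's learning scheme) computes a per-subgroup decision threshold at a fixed false match rate; with subgroup-dependent impostor score distributions, one global threshold over- or under-shoots the target FMR per subgroup.

```python
import numpy as np

rng = np.random.default_rng(3)

def threshold_at_fmr(impostor_scores, fmr=1e-3):
    """Score threshold such that only `fmr` of impostor pairs pass."""
    return float(np.quantile(impostor_scores, 1.0 - fmr))

# Synthetic similarity scores for two subgroups with shifted impostor
# distributions (the situation where one global threshold breaks down).
impostors = {"group_a": rng.normal(0.30, 0.10, 100_000),
             "group_b": rng.normal(0.45, 0.10, 100_000)}

global_thr = threshold_at_fmr(np.concatenate(list(impostors.values())))
print(f"global threshold: {global_thr:.3f}")
for g, scores in impostors.items():
    print(f"{g}: subgroup threshold {threshold_at_fmr(scores):.3f}, "
          f"FMR at global threshold {float((scores >= global_thr).mean()):.4f}")
```

Running this shows one subgroup operating far below the target FMR and the other far above it under the global threshold, while per-subgroup thresholds hit the target for both.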
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)