Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
- URL: http://arxiv.org/abs/2103.09118v5
- Date: Wed, 5 Jul 2023 20:06:22 GMT
- Title: Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
- Authors: Joseph P Robinson and Can Qin and Yann Henon and Samson Timoner and Yun Fu
- Abstract summary: There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and imposter sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
- Score: 50.915684171879036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are demographic biases present in current facial recognition (FR)
models. To measure these biases across different ethnic and gender subgroups,
we introduce our Balanced Faces in the Wild (BFW) dataset. This dataset allows
for the characterization of FR performance per subgroup. We found that relying
on a single score threshold to differentiate between genuine and imposter
sample pairs leads to suboptimal results. Additionally, performance within
subgroups often varies significantly from the global average. Therefore,
specific error rates only hold for populations that match the validation data.
To mitigate imbalanced performances, we propose a novel domain adaptation
learning scheme that uses facial features extracted from state-of-the-art
neural networks. This scheme boosts the average performance and preserves
identity information while removing demographic knowledge. Removing demographic
knowledge prevents potential biases from affecting decision-making and protects
privacy by eliminating demographic information. We explore the proposed method
and demonstrate that subgroup classifiers can no longer learn from features
projected using our domain adaptation scheme. For access to the source code and
data, please visit https://github.com/visionjo/facerec-bias-bfw.
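The abstract's point that a single global score threshold is suboptimal can be illustrated with a small sketch. This is not the authors' implementation: it simply computes a separate decision threshold per demographic subgroup at a fixed false-match rate (FMR), the kind of per-subgroup calibration the finding motivates. The function name, variable names, and the target FMR are hypothetical.

```python
import numpy as np

def subgroup_thresholds(scores, labels, subgroups, target_fmr=1e-3):
    """Compute a per-subgroup decision threshold at a fixed false-match rate.

    scores    : similarity scores for face pairs
    labels    : 1 for genuine pairs, 0 for imposter pairs
    subgroups : subgroup tag for each pair (e.g. "asian_female")
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    subgroups = np.asarray(subgroups)

    thresholds = {}
    for g in np.unique(subgroups):
        imposter = scores[(subgroups == g) & (labels == 0)]
        # FMR is the fraction of imposter pairs scoring at or above the
        # threshold, so take the (1 - target_fmr) quantile of the
        # subgroup's imposter score distribution.
        thresholds[g] = np.quantile(imposter, 1.0 - target_fmr)
    return thresholds
```

If one subgroup's imposter scores run systematically higher than another's, its threshold comes out higher; forcing a single global threshold would then over- or under-reject depending on the subgroup, which is the imbalance the paper measures.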
Related papers
- Invariant Feature Regularization for Fair Face Recognition [45.23154294914808]
We show that biased features generalize poorly in the minority group.
We propose to generate diverse data partitions iteratively in an unsupervised fashion.
INV-REG leads to a new state-of-the-art that improves face recognition across a variety of demographic groups.
arXiv Detail & Related papers (2023-10-23T07:44:12Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our CIE approach not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness [15.210232622716129]
Data-driven predictive solutions predominant in commercial applications tend to suffer from biases and stereotypes.
Data augmentation reduces gender bias by adding counterfactual examples to the training dataset.
We show that some of the examples in the augmented dataset can be unimportant or even harmful for fairness.
arXiv Detail & Related papers (2022-11-20T22:42:30Z)
- Gender Stereotyping Impact in Facial Expression Recognition [1.5340540198612824]
In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, apparent gender representation is usually roughly balanced overall, but representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy of up to 29% in the recognition of certain emotions between genders under the worst bias conditions.
arXiv Detail & Related papers (2022-10-11T10:52:23Z)
- On GANs perpetuating biases for face verification [75.99046162669997]
We show that data generated from generative models such as GANs are prone to bias and fairness issues.
Specifically, GANs trained on the FFHQ dataset show a bias towards generating white faces in the 20-29 age group.
arXiv Detail & Related papers (2022-08-27T17:47:09Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, in which faces of every demographic group are more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z)
- Face Recognition: Too Bias, or Not Too Bias? [45.404162391012726]
We reveal critical insights into problems of bias in state-of-the-art facial recognition systems.
We show variations in the optimal scoring threshold for face-pairs across different subgroups.
We also conduct a human evaluation to measure bias in humans, which supports the hypothesis that such bias exists in human perception.
arXiv Detail & Related papers (2020-02-16T01:08:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.