Mitigating Face Recognition Bias via Group Adaptive Classifier
- URL: http://arxiv.org/abs/2006.07576v2
- Date: Tue, 1 Dec 2020 04:18:39 GMT
- Title: Mitigating Face Recognition Bias via Group Adaptive Classifier
- Authors: Sixue Gong, Xiaoming Liu, and Anil K. Jain
- Abstract summary: This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
- Score: 53.15616844833305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition is known to exhibit bias - subjects in a certain demographic
group can be better recognized than other groups. This work aims to learn a
fair face representation, where faces of every group could be more equally
represented. Our proposed group adaptive classifier mitigates bias by using
adaptive convolution kernels and attention mechanisms on faces based on their
demographic attributes. The adaptive module comprises kernel masks and
channel-wise attention maps for each demographic group so as to activate
different facial regions for identification, leading to more discriminative
features pertinent to their demographics. Our introduced automated adaptation
strategy determines whether to apply adaptation to a certain layer by
iteratively computing the dissimilarity among demographic-adaptive parameters.
A new de-biasing loss function is proposed to mitigate the gap of average
intra-class distance between demographic groups. Experiments on face benchmarks
(RFW, LFW, IJB-A, and IJB-C) show that our work is able to mitigate face
recognition bias across demographic groups while maintaining competitive
accuracy.
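The de-biasing loss described above penalizes the gap in average intra-class distance between demographic groups. A minimal stdlib-Python sketch of that idea follows; the function names, the nested-dict data layout, and the max-minus-min gap formulation are illustrative assumptions, not the paper's actual implementation, which defines the loss over learned embeddings inside the training objective:

```python
import math


def avg_intra_class_distance(embeddings_by_class):
    """Mean pairwise Euclidean distance within each class, averaged over classes.

    `embeddings_by_class` maps a class (identity) id to a list of embedding
    vectors, each a list of floats.
    """
    per_class_means = []
    for vectors in embeddings_by_class.values():
        dists = []
        for i in range(len(vectors)):
            for j in range(i + 1, len(vectors)):
                d = math.sqrt(sum((a - b) ** 2
                                  for a, b in zip(vectors[i], vectors[j])))
                dists.append(d)
        if dists:  # classes with fewer than two samples contribute nothing
            per_class_means.append(sum(dists) / len(dists))
    return sum(per_class_means) / len(per_class_means)


def debias_loss(groups):
    """Gap between the largest and smallest per-group average intra-class distance.

    `groups` maps a demographic-group label to an `embeddings_by_class` dict.
    A value of 0 means every group's identities are equally compact.
    """
    per_group = [avg_intra_class_distance(classes) for classes in groups.values()]
    return max(per_group) - min(per_group)
```

For example, if one group's identities are spread out (average intra-class distance 5.0) and another's are tight (1.0), the loss is 4.0; minimizing it alongside the identification loss pushes the groups toward equally compact representations.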
Related papers
- LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels [0.11999555634662631]
This paper introduces LabellessFace, a framework that mitigates demographic bias in face recognition without requiring demographic group labeling.
We propose a novel fairness enhancement metric called the class favoritism level, which assesses the extent of favoritism towards specific classes.
This method dynamically adjusts learning parameters based on class favoritism levels, promoting fairness across all attributes.
arXiv Detail & Related papers (2024-09-14T02:56:07Z) - FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features [3.9440964696313485]
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often incur a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z) - Score Normalization for Demographic Fairness in Face Recognition [16.421833444307232]
Well-known sample-centered score normalization techniques, Z-norm and T-norm, do not improve fairness for high-security operating points.
We extend the standard Z/T-norm to integrate demographic information in normalization.
We show that our techniques generally improve the overall fairness of five state-of-the-art pre-trained face recognition networks.
arXiv Detail & Related papers (2024-07-19T07:51:51Z) - Invariant Feature Regularization for Fair Face Recognition [45.23154294914808]
We show that biased features generalize poorly to the minority group.
We propose to generate diverse data partitions iteratively in an unsupervised fashion.
INV-REG achieves a new state of the art, improving face recognition across a variety of demographic groups.
arXiv Detail & Related papers (2023-10-23T07:44:12Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degrade when training data differ from test data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for the face recognition task, in which the source and target domains do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domains globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z) - Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.