Consistent Instance False Positive Improves Fairness in Face Recognition
- URL: http://arxiv.org/abs/2106.05519v1
- Date: Thu, 10 Jun 2021 06:20:37 GMT
- Title: Consistent Instance False Positive Improves Fairness in Face Recognition
- Authors: Xingkun Xu, Yuge Huang, Pengcheng Shen, Shaoxin Li, Jilin Li, Feiyue
Huang, Yong Li, Zhen Cui
- Abstract summary: Existing methods heavily rely on accurate demographic annotations.
These methods are typically designed for a specific demographic group and are not general enough.
We propose a false positive rate penalty loss, which mitigates face recognition bias by increasing the consistency of the instance False Positive Rate (FPR).
- Score: 46.55971583252501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Demographic bias is a significant challenge in practical face recognition
systems. Existing methods heavily rely on accurate demographic annotations.
However, such annotations are usually unavailable in real scenarios. Moreover,
these methods are typically designed for a specific demographic group and are
not general enough. In this paper, we propose a false positive rate penalty
loss, which mitigates face recognition bias by increasing the consistency of
instance False Positive Rate (FPR). Specifically, we first define the instance
FPR as the ratio between the number of non-target similarities above a unified threshold and the total number of non-target similarities. The
unified threshold is estimated for a given total FPR. Then, an additional
penalty term, which is proportional to the ratio of the instance FPR to the overall FPR,
is introduced into the denominator of the softmax-based loss. The larger the
instance FPR, the larger the penalty. Through such unequal penalties, the instance FPRs are encouraged to become consistent. Compared with previous debiasing
methods, our method requires no demographic annotations. Thus, it can mitigate
the bias among demographic groups divided by various attributes, and these attributes do not need to be predefined before training.
Extensive experimental results on popular benchmarks demonstrate the
superiority of our method over state-of-the-art competitors. Code and trained
models are available at https://github.com/Tencent/TFace.
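As a concrete illustration of the loss described in the abstract, below is a minimal PyTorch sketch. It assumes cosine-similarity logits against class centers, estimates the unified threshold as a batch-level quantile, and adds the instance-FPR penalty into the softmax denominator. The function name and the `target_fpr`, `alpha`, and `scale` values are illustrative assumptions, not taken from the paper or the TFace code.

```python
import torch
import torch.nn.functional as F

def instance_fpr_penalty_loss(logits, labels, target_fpr=1e-3, alpha=1.0, scale=64.0):
    """Sketch of a softmax-based loss with a consistent-instance-FPR penalty.

    logits: cosine similarities between embeddings and class centers, shape (B, C)
    labels: ground-truth class indices, shape (B,)
    """
    batch_size, num_classes = logits.shape

    # Non-target (impostor) similarities of every instance.
    one_hot = F.one_hot(labels, num_classes).bool()
    non_target = logits[~one_hot].view(batch_size, num_classes - 1)

    # Unified threshold: the similarity value exceeded by a fraction
    # `target_fpr` of all non-target similarities in the batch.
    threshold = torch.quantile(non_target.detach().flatten(), 1.0 - target_fpr)

    # Instance FPR: share of an instance's non-target similarities above the
    # threshold; overall FPR: the same share over the whole batch.
    above = (non_target > threshold).float()
    instance_fpr = above.mean(dim=1)               # (B,)
    overall_fpr = above.mean().clamp_min(1e-6)     # scalar

    # Penalty proportional to instance FPR / overall FPR, added into the
    # softmax denominator: the larger the instance FPR, the larger the penalty.
    penalty = alpha * instance_fpr / overall_fpr   # (B,)

    target_logit = scale * logits.gather(1, labels.view(-1, 1)).squeeze(1)
    denominator = torch.exp(scale * logits).sum(dim=1) + penalty
    return (torch.log(denominator) - target_logit).mean()
```

The abstract only states that the penalty is added to "the softmax-based loss", so the plain-softmax form above is a simplification; in practice the base loss would typically be a margin-based softmax such as CosFace or ArcFace, and the penalty term is orthogonal to that choice.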
Related papers
- Score Normalization for Demographic Fairness in Face Recognition [16.421833444307232]
Well-known sample-centered score normalization techniques, Z-norm and T-norm, do not improve fairness for high-security operating points.
We extend the standard Z/T-norm to integrate demographic information in normalization.
We show that our techniques generally improve the overall fairness of five state-of-the-art pre-trained face recognition networks.
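For reference, standard Z-norm is sample-centered: a probe's raw score is standardized by the mean and standard deviation of that probe's similarities against a non-mated cohort. A minimal sketch of the demographic-aware variant the summary describes might restrict those cohort statistics to the probe's demographic group; the function and argument names below are hypothetical and not taken from the cited paper.

```python
import numpy as np

def demographic_z_norm(raw_score, probe_embedding, cohort_embeddings,
                       cohort_groups, probe_group):
    """Z-normalize a comparison score using only same-demographic cohort samples.

    probe_embedding: L2-normalized probe embedding, shape (d,)
    cohort_embeddings: L2-normalized non-mated cohort embeddings, shape (n, d)
    cohort_groups: demographic label per cohort sample, shape (n,)
    """
    # Cosine similarities between the probe and the cohort.
    cohort_scores = cohort_embeddings @ probe_embedding
    # Restrict the normalization statistics to the probe's demographic group.
    same_group = cohort_scores[np.asarray(cohort_groups) == probe_group]
    mu, sigma = same_group.mean(), same_group.std() + 1e-6
    # Standard Z-norm: centre and scale the raw score by the cohort statistics.
    return (raw_score - mu) / sigma
```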
arXiv Detail & Related papers (2024-07-19T07:51:51Z) - Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z) - Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher
Mixture Model [7.049738935364298]
In this work, we investigate the gender bias of deep Face Recognition networks.
Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology.
In fact, extensive numerical experiments on a variety of datasets show that a careful selection significantly reduces gender bias.
arXiv Detail & Related papers (2022-10-24T23:53:56Z) - Debiasing Neural Retrieval via In-batch Balancing Regularization [25.941718123899356]
We develop a differentiable normed Pairwise Ranking Fairness (nPRF) measure and leverage T-statistics on top of nPRF to improve fairness.
Our method with nPRF achieves significantly less bias with minimal degradation in ranking performance compared with the baseline.
arXiv Detail & Related papers (2022-05-18T22:57:15Z) - Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We theoretically prove that this offsets the influence of user/item propensity on learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
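One standard way to realize instance reweighting is to weight each training example inversely to the frequency of its (label, demographic) combination so that no author group dominates the loss. The sketch below shows that generic scheme; it is not the cited paper's exact weighting, and the function name is illustrative.

```python
from collections import Counter

def balanced_instance_weights(labels, demographics):
    """Return one weight per training instance so that each
    (label, demographic) combination contributes equally to the loss.
    """
    joint_counts = Counter(zip(labels, demographics))
    n = len(labels)
    num_cells = len(joint_counts)
    # Weight each instance inversely to the size of its (label, demographic)
    # cell, normalized so the average weight over the dataset is 1.
    return [n / (num_cells * joint_counts[(y, d)])
            for y, d in zip(labels, demographics)]

# Example: the returned weights can be passed as per-sample loss weights.
weights = balanced_instance_weights(
    labels=["pos", "pos", "neg", "neg", "neg"],
    demographics=["A", "B", "A", "A", "B"],
)
```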
arXiv Detail & Related papers (2021-09-16T23:40:28Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Post-Comparison Mitigation of Demographic Bias in Face Recognition Using
Fair Score Normalization [15.431761867166]
We propose a novel unsupervised fair score normalization approach to reduce the effect of bias in face recognition.
Our solution reduces demographic biases by up to 82.7% in the case when gender is considered.
In contrast to previous works, our fair normalization approach enhances the overall performance by up to 53.2% at a false match rate of 0.001 and up to 82.9% at a false match rate of 0.00001.
arXiv Detail & Related papers (2020-02-10T08:17:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.