Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning
- URL: http://arxiv.org/abs/2505.01699v1
- Date: Sat, 03 May 2025 05:26:29 GMT
- Title: Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning
- Authors: Yifan Liu, Ruichen Yao, Yaokun Liu, Ruohan Zong, Zelin Li, Yang Zhang, Dong Wang
- Abstract summary: We focus on face component fairness, a fairness notion defined by biological face features. We propose Bayesian Network-informed Meta Reweighting (BNMR), which incorporates a Bayesian Network calibrator to guide an adaptive meta-learning-based sample reweighting process.
- Score: 12.863447377767182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread integration of face recognition technologies into various applications (e.g., access control and personalized advertising) necessitates a critical emphasis on fairness. While previous efforts have focused on demographic fairness, the fairness of individual biological face components remains unexplored. In this paper, we focus on face component fairness, a fairness notion defined by biological face features. To the best of our knowledge, this is the first work to mitigate bias in face attribute prediction at the biological feature level. We identify two key challenges in optimizing face component fairness: attribute label scarcity and attribute inter-dependencies, both of which limit the effectiveness of previous bias mitigation approaches. To address these issues, we propose \textbf{B}ayesian \textbf{N}etwork-informed \textbf{M}eta \textbf{R}eweighting (BNMR), which incorporates a Bayesian Network calibrator to guide an adaptive meta-learning-based sample reweighting process. During training, the Bayesian Network calibrator dynamically tracks model bias and encodes prior probabilities for face component attributes to overcome these challenges. To demonstrate the efficacy of our approach, we conduct extensive experiments on a large-scale real-world human face dataset. Our results show that BNMR consistently outperforms recent face bias mitigation baselines. Moreover, our results suggest that improving face component fairness has a positive impact on commonly considered demographic fairness (e.g., \textit{gender}). Our findings pave the way for new research on face component fairness, suggesting that it could serve as a potential surrogate objective for demographic fairness. The code for our work is publicly available~\footnote{https://github.com/yliuaa/BNMR-FairCompFace.git}.
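To make the reweighting idea concrete, the following is a minimal, hypothetical sketch of meta-learning-based sample reweighting guided by an attribute prior, in the spirit of what the abstract describes. It is not the authors' released implementation (see the repository linked above for that); the logistic-regression setting, function names, and hyperparameters are all assumptions.

```python
# Minimal sketch (NOT the authors' code) of prior-guided, meta-learning-based
# sample reweighting: per-sample weights are set by how well each training
# gradient aligns with the gradient on a small balanced meta set, then scaled
# by a prior over the sample's attribute configuration (a stand-in for the
# Bayesian Network calibrator described in the abstract).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grads(w, X, y):
    """Per-example gradient of the logistic loss w.r.t. parameters w."""
    p = sigmoid(X @ w)
    return (p - y)[:, None] * X                      # shape (n_samples, n_features)

def reweighted_step(w, X, y, sample_w, lr=0.1):
    """One gradient step on the sample-weighted training loss."""
    g = (sample_w[:, None] * per_sample_grads(w, X, y)).mean(axis=0)
    return w - lr * g

def meta_reweight(w, X_tr, y_tr, X_meta, y_meta, prior, meta_lr=1.0):
    """Set per-sample weights from gradient alignment with the meta set,
    rescale by the attribute prior, then normalize to mean ~1."""
    g_tr = per_sample_grads(w, X_tr, y_tr)                      # (n, d)
    g_meta = per_sample_grads(w, X_meta, y_meta).mean(axis=0)   # (d,)
    align = g_tr @ g_meta                                       # alignment per sample
    sample_w = np.maximum(meta_lr * align, 0.0) * prior
    s = sample_w.sum()
    return sample_w / s * len(sample_w) if s > 0 else np.ones_like(sample_w)

# Toy usage: 200 samples, 5 features, and a scarce balanced meta set of 20.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)
X_meta, y_meta = X[:20], y[:20]
prior = np.ones(len(X))          # would come from the Bayesian Network calibrator
w = np.zeros(5)
for _ in range(50):
    sample_w = meta_reweight(w, X, y, X_meta, y_meta, prior)
    w = reweighted_step(w, X, y, sample_w)
```

In BNMR the `prior` factor would be produced by the Bayesian Network calibrator from tracked model bias and prior probabilities of face component attributes; here it is a constant placeholder.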
Related papers
- On the "Illusion" of Gender Bias in Face Recognition: Explaining the Fairness Issue Through Non-demographic Attributes [7.602456562464879]
Face recognition systems exhibit significant accuracy differences based on the user's gender. We propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis. Experiments show that the gender gap vanishes when images of male and female subjects share specific attributes.
arXiv Detail & Related papers (2025-01-21T10:21:19Z)
- Fairer Analysis and Demographically Balanced Face Generation for Fairer Face Verification [69.04239222633795]
Face recognition and verification are two computer vision tasks whose performances have advanced with the introduction of deep representations. Ethical, legal, and technical challenges due to the sensitive nature of face data and biases in real-world training datasets hinder their development. We introduce a new controlled generation pipeline that improves fairness.
arXiv Detail & Related papers (2024-12-04T14:30:19Z)
- FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features [3.9440964696313485]
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often incur a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z)
- Generalized Face Liveness Detection via De-fake Face Generator [52.23271636362843]
Previous Face Anti-spoofing (FAS) methods face the challenge of generalizing to unseen domains. We propose an Anomalous cue Guided FAS (AG-FAS) method, which can effectively leverage large-scale additional real faces. Our method achieves state-of-the-art results under cross-domain evaluations with unseen scenarios and unknown presentation attacks.
arXiv Detail & Related papers (2024-01-17T06:59:32Z)
- MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition [37.756287362799945]
We argue that the commonly used attribute-based fairness metric is not appropriate for face recognition.
We propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches.
Our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets.
arXiv Detail & Related papers (2022-11-28T09:47:21Z)
- Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracy across gender and skin tone, as well as an interplay of confounding factors beyond demographics.
arXiv Detail & Related papers (2022-11-07T14:27:55Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Enhancing Fairness of Visual Attribute Predictors [6.6424782986402615]
We introduce fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure.
Our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors.
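As an illustration of the kind of batch-level fairness regularizer described above, here is a minimal sketch, reconstructed from the summary rather than taken from the paper's code, of a Demographic Parity penalty added to a binary classification loss; `lam` and the function names are hypothetical.

```python
# Sketch (assumptions, not the paper's code): task loss plus a differentiable
# batch estimate of the Demographic Parity gap between two groups.
import torch
import torch.nn.functional as F

def demographic_parity_gap(probs: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """Absolute difference of mean predicted probability between the two
    demographic groups; assumes both groups are present in the batch."""
    return (probs[group == 0].mean() - probs[group == 1].mean()).abs()

def fair_loss(logits, targets, group, lam=0.5):
    """Binary cross-entropy plus lam * fairness penalty (lam is hypothetical)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets.float())
    return bce + lam * demographic_parity_gap(torch.sigmoid(logits), group)

# Toy usage with random logits, binary targets, and a binary group label.
logits = torch.randn(32, requires_grad=True)
targets = torch.randint(0, 2, (32,))
group = torch.randint(0, 2, (32,))
loss = fair_loss(logits, targets, group)
loss.backward()   # gradients flow through both the task and fairness terms
```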
arXiv Detail & Related papers (2022-07-07T15:02:04Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically study bias from both the data and the algorithm perspectives.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
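Below is a simplified sketch of how per-group adaptive margins could enter a large-margin softmax loss. It is an illustration based only on the summary above, not the released MBN code; the CosFace-style formulation, the grouping, and the margin values are assumptions.

```python
# Sketch (hypothetical): per-group margins subtracted from the target-class
# cosine similarity before a scaled softmax; in MBN such margins would be
# meta-learned on a balanced meta set rather than fixed.
import torch
import torch.nn.functional as F

def margin_softmax_loss(cos_sim, labels, group, margins, scale=30.0):
    """cos_sim: (batch, n_classes) cosine similarities to class centers.
    margins: (n_groups,) learnable per-group margins."""
    m = margins[group]                                   # (batch,)
    onehot = F.one_hot(labels, cos_sim.size(1)).float()  # (batch, n_classes)
    adjusted = cos_sim - onehot * m[:, None]             # margin on target class only
    return F.cross_entropy(scale * adjusted, labels)

# Toy usage: 8 identities, 2 demographic groups, batch of 16.
cos_sim = torch.randn(16, 8).clamp(-1, 1)
labels = torch.randint(0, 8, (16,))
group = torch.randint(0, 2, (16,))
margins = torch.nn.Parameter(torch.full((2,), 0.35))
loss = margin_softmax_loss(cos_sim, labels, group, margins)
loss.backward()   # margins receive gradients, enabling a meta update
```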
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z)
- BioMetricNet: deep unconstrained face verification through learning of metrics regularized onto Gaussian distributions [25.00475462213752]
We present BioMetricNet, a novel framework for deep unconstrained face verification.
The proposed approach does not impose any specific metric on facial features.
It shapes the decision space by learning a latent representation in which matching and non-matching pairs are mapped onto clearly separated and well-behaved target distributions.
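Based only on the summary above, the following rough sketch shows one way pairwise scores could be regularized toward two separated target Gaussians; the actual BioMetricNet loss may differ substantially (for example, by matching distribution statistics rather than per-pair likelihoods), and the target means and variance here are made-up values.

```python
# Rough sketch (hypothetical): drive matching-pair scores toward N(mu_pos, sigma^2)
# and non-matching-pair scores toward N(mu_neg, sigma^2) via a (constant-free)
# Gaussian negative log-likelihood.
import torch

def gaussian_target_loss(scores, is_match, mu_pos=2.0, mu_neg=-2.0, sigma=0.5):
    mu = torch.where(is_match, torch.tensor(mu_pos), torch.tensor(mu_neg))
    return (((scores - mu) / sigma) ** 2).mean()

# Toy usage: scores would come from a metric head over face-pair embeddings.
scores = torch.randn(64, requires_grad=True)
is_match = torch.rand(64) < 0.5
loss = gaussian_target_loss(scores, is_match)
loss.backward()
```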
arXiv Detail & Related papers (2020-08-13T17:22:46Z)