Fairness Under Cover: Evaluating the Impact of Occlusions on Demographic Bias in Facial Recognition
- URL: http://arxiv.org/abs/2408.10175v1
- Date: Mon, 19 Aug 2024 17:34:19 GMT
- Title: Fairness Under Cover: Evaluating the Impact of Occlusions on Demographic Bias in Facial Recognition
- Authors: Rafael M. Mamede, Pedro C. Neto, Ana F. Sequeira
- Abstract summary: We evaluate the effect of synthetically added occlusions on the performance of face recognition models trained on the BUPT-Balanced and BUPT-GlobalFace datasets.
We propose a new metric, Face Occlusion Impact Ratio (FOIR), that quantifies the extent to which occlusions affect model performance across different demographic groups.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study investigates the effects of occlusions on the fairness of face recognition systems, particularly focusing on demographic biases. Using the Racial Faces in the Wild (RFW) dataset and synthetically added realistic occlusions, we evaluate their effect on the performance of face recognition models trained on the BUPT-Balanced and BUPT-GlobalFace datasets. We note increases in the dispersion of FMR, FNMR, and accuracy alongside decreases in fairness according to Equalized Odds, Demographic Parity, STD of Accuracy, and Fairness Discrepancy Rate. Additionally, we utilize a pixel attribution method to understand the importance of occlusions in model predictions, proposing a new metric, Face Occlusion Impact Ratio (FOIR), which quantifies the extent to which occlusions affect model performance across different demographic groups. Our results indicate that occlusions exacerbate existing demographic biases, with models placing higher importance on occlusions in an unequal fashion, particularly affecting African individuals more severely.
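To ground the metrics above, here is a minimal NumPy sketch under stated assumptions. The paper's exact FOIR formula is not reproduced in this summary, so `foir` below encodes one plausible reading: the share of pixel-attribution mass falling on occluded pixels, normalized by the occlusion's area share. `per_group_error_rates` is only the standard FMR/FNMR bookkeeping whose per-group dispersion the abstract reports; all function names are illustrative.

```python
import numpy as np

def per_group_error_rates(scores, labels, groups, threshold):
    """Per-group FMR and FNMR (labels: 1 = genuine pair, 0 = impostor).
    The spread of these values across groups is the dispersion the
    abstract describes."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        rates[g] = {
            "FMR": float(np.mean(scores[m & (labels == 0)] >= threshold)),
            "FNMR": float(np.mean(scores[m & (labels == 1)] < threshold)),
        }
    return rates

def foir(attributions, occlusion_masks, groups):
    """Hypothetical FOIR reading: attribution mass on occluded pixels,
    normalized by the occluded area fraction and averaged per group.
    Values > 1 mean the model attends to the occlusion more than a
    uniform attribution would."""
    out = {}
    for g in np.unique(groups):
        ratios = []
        for attr, mask in zip(attributions[groups == g],
                              occlusion_masks[groups == g]):
            a = np.abs(attr)
            mass = a[mask].sum() / a.sum()  # attribution share on occlusion
            area = mask.mean()              # pixel share that is occluded
            ratios.append(mass / area)
        out[g] = float(np.mean(ratios))
    return out
```

Comparing `foir` values across groups would surface the unequal occlusion importance the abstract reports, e.g. systematically higher values for one demographic group.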
Related papers
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset, the Fair Forgery Detection (FairFD) dataset, on which we demonstrate the racial bias of publicly available state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Toward Fair Facial Expression Recognition with Improved Distribution Alignment [19.442685015494316]
We present a novel approach to mitigate bias in facial expression recognition (FER) models.
Our method aims to reduce sensitive attribute information, such as gender, age, or race, in the embeddings produced by FER models.
For the first time, we analyze the notion of attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit biases towards more attractive faces.
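The paper above removes sensitive-attribute information via improved distribution alignment; as a point of comparison, a common generic mechanism for the same goal is a gradient-reversal adversary. The PyTorch sketch below shows that generic technique, not the paper's method, and every name in it is illustrative.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient
    in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class AttributeAdversary(nn.Module):
    """Predicts a sensitive attribute (e.g. a gender or age bucket) from
    the embedding; training the backbone through the reversed gradient
    pushes that information out of the embedding."""
    def __init__(self, emb_dim, n_attr_classes, lam=1.0):
        super().__init__()
        self.head = nn.Linear(emb_dim, n_attr_classes)
        self.lam = lam

    def forward(self, emb):
        return self.head(GradReverse.apply(emb, self.lam))
```

The adversary's cross-entropy loss is simply added to the task loss; the reversal makes the backbone minimize the task loss while maximizing the adversary's error on the sensitive attribute.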
arXiv Detail & Related papers (2023-06-11T14:59:20Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- Enhancing Fairness of Visual Attribute Predictors [6.6424782986402615]
We introduce fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure.
Our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors.
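As a rough illustration of what such a batch-estimated regularizer can look like, here is a minimal demographic-parity penalty in PyTorch. This is a sketch, not the authors' implementation; an Equalized Odds variant would additionally condition the group means on the ground-truth label.

```python
import torch

def demographic_parity_penalty(logits, groups):
    """Soft batch estimate of Demographic Parity: the summed gap between
    each group's mean predicted positive rate and the batch-wide mean.
    `groups` holds integer group ids per sample (illustrative)."""
    probs = torch.sigmoid(logits)
    overall = probs.mean()
    penalty = torch.zeros((), device=logits.device)
    for g in torch.unique(groups):
        penalty = penalty + (probs[groups == g].mean() - overall).abs()
    return penalty

# total_loss = task_loss + lam * demographic_parity_penalty(logits, groups)
```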
arXiv Detail & Related papers (2022-07-07T15:02:04Z)
- Fair SA: Sensitivity Analysis for Fairness in Face Recognition [1.7149364927872013]
We propose a new, generic framework for fairness evaluation based on robustness.
We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed.
arXiv Detail & Related papers (2022-02-08T01:16:09Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions correlate with the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Reliability and Validity of Image-Based and Self-Reported Skin Phenotype Metrics [0.0]
We show that measures of skin-tone for biometric performance evaluations must come from objective, characterized, and controlled sources.
arXiv Detail & Related papers (2021-06-18T16:12:24Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
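A generic remedy implied by the single-threshold observation is a per-group operating point. The sketch below picks each group's verification threshold at a fixed target FMR; this is a standard calibration step, not the paper's domain-adaptation scheme, and all names are illustrative.

```python
import numpy as np

def per_group_thresholds(scores, labels, groups, target_fmr=1e-3):
    """For each group, choose the decision threshold whose impostor
    acceptance rate is approximately target_fmr
    (labels: 1 = genuine pair, 0 = impostor pair)."""
    thresholds = {}
    for g in np.unique(groups):
        impostor = scores[(groups == g) & (labels == 0)]
        thresholds[g] = float(np.quantile(impostor, 1.0 - target_fmr))
    return thresholds

# A comparison score s for a pair from group g is accepted when
# s >= thresholds[g].
```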
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Investigating Bias and Fairness in Facial Expression Recognition [15.45073173331206]
We compare three approaches to bias and fairness in facial expression recognition.
Data augmentation improves the accuracy of the baseline model, but this alone is unable to mitigate the bias effect.
The disentangled approach is the best for mitigating demographic bias.
arXiv Detail & Related papers (2020-07-20T13:12:53Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.