Unravelling the Effect of Image Distortions for Biased Prediction of
Pre-trained Face Recognition Models
- URL: http://arxiv.org/abs/2108.06581v1
- Date: Sat, 14 Aug 2021 16:49:05 GMT
- Authors: Puspita Majumdar, Surbhi Mittal, Richa Singh, Mayank Vatsa
- Abstract summary: We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are correlated with the performance gap of the model across different subgroups.
- Score: 86.79402670904338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying and mitigating bias in deep learning algorithms has gained
significant attention in the past few years due to its impact on society.
Researchers argue that models trained on balanced datasets with good
representation provide equal and unbiased performance across subgroups.
However, can a seemingly unbiased pre-trained model become biased when the
input data undergoes certain distortions? For the first time, we attempt to
answer this question in the context of face recognition. We provide a
systematic analysis of the performance of four state-of-the-art deep
face recognition models in the presence of image distortions across different
gender and race subgroups. We observe that image distortions are correlated
with the performance gap of the models across different subgroups.
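The evaluation described above can be sketched in a few lines. The specific distortion (Gaussian noise), the fixed match threshold, and the max-minus-min gap metric below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def add_gaussian_noise(images, sigma, rng):
    """Simulate one image distortion (sensor noise) by adding Gaussian noise.
    Pixel values are assumed to lie in [0, 1]."""
    return np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)

def subgroup_accuracy(scores, labels, groups, threshold=0.5):
    """Verification accuracy per subgroup at a fixed match threshold."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        preds = scores[mask] >= threshold
        accs[g] = float(np.mean(preds == labels[mask]))
    return accs

def performance_gap(accs):
    """Max-minus-min subgroup accuracy: a simple measure of biased prediction."""
    vals = list(accs.values())
    return max(vals) - min(vals)
```

Running the same model on clean and distorted copies of the probe set, then comparing `performance_gap` values, is one way to quantify whether a distortion widens the gap between subgroups.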
Related papers
- Dataset Scale and Societal Consistency Mediate Facial Impression Bias in Vision-Language AI [17.101569078791492]
We study 43 CLIP vision-language models to determine whether they learn human-like facial impression biases.
We show for the first time that the degree to which a bias is shared across a society predicts the degree to which it is reflected in a CLIP model.
arXiv Detail & Related papers (2024-08-04T08:26:58Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
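Projecting out biased directions can be sketched as follows; the function name and the single-direction setup are illustrative, and this is a generic orthogonal projection rather than the paper's calibrated variant.

```python
import numpy as np

def debias_embeddings(embeddings, bias_directions):
    """Remove a biased subspace from embeddings by projecting onto its
    orthogonal complement: P = I - V (V^T V)^+ V^T, where the columns
    of V span the biased directions."""
    V = np.atleast_2d(np.asarray(bias_directions, dtype=float)).T  # (d, k)
    P = np.eye(V.shape[0]) - V @ np.linalg.pinv(V.T @ V) @ V.T
    return embeddings @ P  # P is symmetric, so row-wise projection is valid
```

Because `P` is an orthogonal projector, applying it twice leaves the embeddings unchanged, which is a useful sanity check on the construction.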
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Fair SA: Sensitivity Analysis for Fairness in Face Recognition [1.7149364927872013]
We propose a new fairness evaluation based on robustness in the form of a generic framework.
We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed.
arXiv Detail & Related papers (2022-02-08T01:16:09Z)
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy problems in the context of visual recognition.
Based on (approximate) knowledge of the biasing mechanisms at work, our approach reweights the observations.
We propose to use a low dimensional image representation, shared across the image databases.
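The reweighting idea amounts to importance weighting; the density ratios and the mean-one normalisation below are assumptions about how the biasing mechanism is modelled, not the paper's exact scheme.

```python
import numpy as np

def importance_weights(p_target, p_biased):
    """Weight each observation by target density over biased sampling
    density, normalised so the weights average to one."""
    w = np.asarray(p_target, dtype=float) / np.asarray(p_biased, dtype=float)
    return w * (len(w) / w.sum())

def reweighted_risk(losses, weights):
    """Empirical risk under the debiased (reweighted) distribution."""
    return float(np.mean(np.asarray(losses) * weights))
```

Observations that the biased sampling under-represents receive weights above one, so their losses count more during training.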
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
- Understanding Gender and Racial Disparities in Image Recognition Models [0.0]
We investigate using a multi-label softmax cross-entropy loss instead of binary cross-entropy on a multi-label classification problem.
We use the MR2 dataset to evaluate the fairness in the model outcomes and try to interpret the mistakes by looking at model activations and suggest possible fixes.
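One plausible reading of a multi-label softmax cross-entropy is to spread the target mass uniformly over the positive labels and apply ordinary softmax cross-entropy, rather than an independent sigmoid/BCE per label. The formulation below is a sketch under that assumption, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multilabel_softmax_ce(logits, multi_hot):
    """Cross-entropy against a target distribution spread uniformly over
    the positive labels, instead of an independent BCE per label."""
    t = multi_hot / multi_hot.sum(axis=-1, keepdims=True)
    logp = np.log(softmax(logits) + 1e-12)
    return float(-np.mean((t * logp).sum(axis=-1)))
```

Unlike per-label BCE, the softmax couples the labels: raising the logit of one positive label necessarily lowers the probability assigned to the others.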
arXiv Detail & Related papers (2021-07-20T01:05:31Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
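Imperceptible additive perturbations of this kind are typically gradient-based. The fast-gradient-sign step below is a generic sketch of the idea, not the paper's stereo-specific attack, and assumes pixel values in [0, 1].

```python
import numpy as np

def fgsm_perturbation(x, grad, epsilon):
    """Add an L-infinity-bounded perturbation in the direction of the loss
    gradient; epsilon controls how (im)perceptible the change is."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep a valid pixel range
```

Training on such perturbed inputs (adversarial data augmentation) is what the summary above reports as improving robustness.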
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.