Fair SA: Sensitivity Analysis for Fairness in Face Recognition
- URL: http://arxiv.org/abs/2202.03586v2
- Date: Wed, 9 Feb 2022 18:31:40 GMT
- Title: Fair SA: Sensitivity Analysis for Fairness in Face Recognition
- Authors: Aparna R. Joshi, Xavier Suau, Nivedha Sivakumar, Luca Zappella and
Nicholas Apostoloff
- Abstract summary: We propose a new fairness evaluation based on robustness in the form of a generic framework.
We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed.
- Score: 1.7149364927872013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the use of deep learning in high impact domains becomes ubiquitous, it is
increasingly important to assess the resilience of models. One such high impact
domain is that of face recognition, with real world applications involving
images affected by various degradations, such as motion blur or high exposure.
Moreover, images captured across different attributes, such as gender and race,
can also challenge the robustness of a face recognition algorithm. While
traditional summary statistics suggest that the aggregate performance of face
recognition models has continued to improve, these metrics do not directly
measure the robustness or fairness of the models. Visual Psychophysics
Sensitivity Analysis (VPSA) [1] provides a way to pinpoint the individual
causes of failure by way of introducing incremental perturbations in the data.
However, perturbations may affect subgroups differently. In this paper, we
propose a new fairness evaluation based on robustness in the form of a generic
framework that extends VPSA. With this framework, we can analyze the ability of
a model to perform fairly for different subgroups of a population affected by
perturbations, and pinpoint the exact failure modes for a subgroup by measuring
targeted robustness. With the increasing focus on the fairness of models, we
use face recognition as an example application of our framework and propose to
compactly visualize the fairness analysis of a model via AUC matrices. We
analyze the performance of common face recognition models and empirically show
that certain subgroups are at a disadvantage when images are perturbed, thereby
uncovering trends that were not visible using the model's performance on
subgroups without perturbations.
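The abstract describes the recipe concretely enough for a rough sketch: sweep each perturbation type over increasing severity levels, track accuracy per subgroup, and collapse each accuracy-vs-severity curve into one cell of an AUC matrix (subgroups as rows, perturbation types as columns). The minimal Python sketch below illustrates that recipe under assumed interfaces; `model`, the perturbation functions, and all other names are hypothetical stand-ins, not the paper's code.

```python
import numpy as np

def fairness_auc_matrix(model, images, labels, subgroup, perturbations, levels):
    """Sketch of the Fair SA idea: per-subgroup accuracy under incremental
    perturbations, summarized as an AUC matrix. All interfaces are assumed:
    model(images) -> predicted labels; perturbations maps a name to a function
    f(images, severity) -> perturbed images; levels is a list of severities.
    """
    labels = np.asarray(labels)
    subgroup = np.asarray(subgroup)
    groups = np.unique(subgroup)
    auc = np.zeros((len(groups), len(perturbations)))
    for j, perturb in enumerate(perturbations.values()):
        acc = np.zeros((len(levels), len(groups)))  # severity x subgroup
        for i, level in enumerate(levels):
            correct = np.asarray(model(perturb(images, level))) == labels
            for g, group in enumerate(groups):
                acc[i, g] = correct[subgroup == group].mean()
        # Mean accuracy over uniformly spaced severities equals the normalized
        # area under the accuracy-vs-severity curve.
        auc[:, j] = acc.mean(axis=0)
    return groups, auc
```

Reading the matrix is the point: within a column, a subgroup whose AUC sits well below the others is disproportionately harmed by that specific perturbation, which is the targeted-robustness failure mode the abstract refers to.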
Related papers
- Fairness Under Cover: Evaluating the Impact of Occlusions on Demographic Bias in Facial Recognition [0.0]
We evaluate the effect of occlusions on the performance of face recognition models trained on the BUPT-Balanced and BUPT-GlobalFace datasets.
We propose a new metric, Face Occlusion Impact Ratio (FOIR), that quantifies the extent to which occlusions affect model performance across different demographic groups (one plausible reading is sketched below).
arXiv Detail & Related papers (2024-08-19T17:34:19Z)
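The summary names FOIR but not its formula, so the following is only a loudly hypothetical reading: each demographic group's accuracy drop under occlusion, expressed relative to the overall drop, so that values above 1 flag groups hit harder than average. Function and variable names are invented for illustration.

```python
import numpy as np

def face_occlusion_impact_ratio(correct_clean, correct_occluded, group_labels):
    """Hypothetical FOIR reading (not the paper's verified definition):
    per-group accuracy drop under occlusion divided by the overall drop.
    correct_* are per-sample 0/1 correctness arrays on clean and occluded
    images; group_labels holds each sample's demographic group.
    """
    correct_clean = np.asarray(correct_clean, dtype=float)
    correct_occluded = np.asarray(correct_occluded, dtype=float)
    group_labels = np.asarray(group_labels)
    overall_drop = correct_clean.mean() - correct_occluded.mean()
    ratios = {}
    for g in np.unique(group_labels):
        m = group_labels == g
        group_drop = correct_clean[m].mean() - correct_occluded[m].mean()
        ratios[g] = group_drop / overall_drop  # assumes overall_drop > 0
    return ratios
```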
- Counterfactual Image Generation for adversarially robust and interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake" (a minimal sketch of this pattern follows this entry).
We show that the model exhibits improved robustness to adversarial attacks and that the discriminator's "fakeness" value serves as an uncertainty measure of the predictions.
arXiv Detail & Related papers (2023-10-01T18:50:29Z)
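The single-model classifier/discriminator described above matches a well-known pattern from semi-supervised GAN training: a K-way classifier extended with one extra "fake" class. The sketch below shows that pattern in PyTorch under that assumption; the class and loss names are invented, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierDiscriminator(nn.Module):
    """K real classes plus one extra 'fake' class at index K. Real images are
    pushed toward their true class, generated images toward the fake class.
    Illustrative sketch only, not the paper's architecture.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, num_classes + 1)
        self.fake_index = num_classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

def discriminator_loss(model, real_x, real_y, fake_x):
    # Real images: cross-entropy toward their labeled class.
    loss_real = F.cross_entropy(model(real_x), real_y)
    # Generated images: cross-entropy toward the extra "fake" class.
    fake_y = torch.full((fake_x.size(0),), model.fake_index,
                        dtype=torch.long, device=fake_x.device)
    loss_fake = F.cross_entropy(model(fake_x), fake_y)
    return loss_real + loss_fake
```

Under this reading, the softmax mass on the extra class is the natural candidate for the "fakeness" value used as an uncertainty measure.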
- Recursive Counterfactual Deconfounding for Object Recognition [20.128093193861165]
We propose a Recursive Counterfactual Deconfounding model for object recognition in both closed-set and open-set scenarios.
We show that the proposed RCD model performs significantly better than 11 state-of-the-art baselines in most cases.
arXiv Detail & Related papers (2023-09-25T07:46:41Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases a model's robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple yet effective solution to construct models that achieve good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations (the standard FID formula is sketched after this entry).
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
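For context on why pixel-level perturbations can move FID: the metric fits a Gaussian to the Inception features of each image set and compares the fits, so any perturbation that shifts the feature statistics shifts the score. Below is the standard FID formula over precomputed feature matrices (the feature extractor itself is outside this sketch); the variable names are mine, not the paper's.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Standard FID between two feature sets (n_samples x dim):
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2}).
    Perturbing pixels changes the features, hence mu and C, hence the score.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; discard the tiny
    # imaginary parts introduced by numerical error.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```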
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to the security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)