Two-Face: Adversarial Audit of Commercial Face Recognition Systems
- URL: http://arxiv.org/abs/2111.09137v1
- Date: Wed, 17 Nov 2021 14:21:23 GMT
- Title: Two-Face: Adversarial Audit of Commercial Face Recognition Systems
- Authors: Siddharth D Jaiswal, Karthikeya Duggirala, Abhisek Dash, Animesh Mukherjee
- Abstract summary: Computer vision applications tend to be biased against minority groups, which results in unfair and concerning societal and political outcomes.
We perform an extensive adversarial audit on multiple systems and datasets, making a number of concerning observations.
We conclude with a discussion on the broader societal impacts in light of these observations and a few suggestions on how to collectively deal with this issue.
- Score: 6.684965883341269
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer vision applications like automated face detection are used for a
variety of purposes ranging from unlocking smart devices to tracking potential
persons of interest for surveillance. Audits of these applications have
revealed that they tend to be biased against minority groups, which results in
unfair and concerning societal and political outcomes. Despite multiple studies
over time, these biases have not been mitigated completely and have in fact
increased for certain tasks like age prediction. While such systems are audited
over benchmark datasets, it becomes necessary to evaluate their robustness for
adversarial inputs. In this work, we perform an extensive adversarial audit on
multiple systems and datasets, making a number of concerning observations:
there has been a drop in accuracy for some tasks on the CELEBSET dataset since a
previous audit. While there still exists a bias in accuracy against individuals
from minority groups for multiple datasets, a more worrying observation is that
these biases become far more pronounced for adversarial inputs targeting
minority groups. We conclude with a discussion of the broader societal
impacts in light of these observations and a few suggestions on how to
collectively deal with this issue.
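As a concrete illustration of the kind of adversarial input such an audit feeds to a face-analysis model, here is a minimal white-box FGSM sketch. The audited commercial systems are black boxes and the paper does not disclose its exact perturbation recipe, so `TinyFaceNet`, the attack settings, and all tensors below are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    """Hypothetical stand-in for a face-attribute classifier."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, images, labels, eps=0.03):
    """One-step FGSM: nudge each pixel (L-inf budget eps) in the direction
    that increases the classification loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

model = TinyFaceNet().eval()
x = torch.rand(8, 3, 64, 64)     # batch of face crops in [0, 1]
y = torch.randint(0, 2, (8,))    # ground-truth attribute labels

x_adv = fgsm_attack(model, x, y)

# The audit signal: compare clean vs. adversarial accuracy, computed
# separately for each demographic subgroup, and inspect the gap.
acc_clean = (model(x).argmax(1) == y).float().mean()
acc_adv = (model(x_adv).argmax(1) == y).float().mean()
print(f"clean acc {acc_clean:.2f}, adversarial acc {acc_adv:.2f}")
```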
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
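A minimal sketch of one standard correction for demographic-inference error when measuring skew; the paper proposes its own mitigation, and the confusion matrix and counts below are hypothetical:

```python
import numpy as np

# Confusion matrix of the demographic inference model, estimated on a
# labelled holdout set: C[i, j] = P(inferred group j | true group i).
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Ad-delivery counts tallied by *inferred* group (hypothetical numbers).
observed = np.array([640.0, 360.0])

# observed = C.T @ true, so invert the mixing to recover true counts.
corrected = np.linalg.solve(C.T, observed)

def skew(counts):
    """Fraction of deliveries attributed to the first group."""
    return counts[0] / counts.sum()

print(f"naive skew:     {skew(observed):.3f}")
print(f"corrected skew: {skew(corrected):.3f}")
```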
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- Reducing annotator bias by belief elicitation [3.0040661953201475]
We propose a simple method for handling bias in annotations without requirements on the number of annotators or instances.
We ask annotators about their beliefs about other annotators' judgements of an instance, under the hypothesis that these beliefs may provide more representative labels than the judgements themselves.
The results indicate that bias, defined as systematic differences between the two groups of annotators, is consistently reduced when asking for beliefs instead of judgements.
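A minimal sketch of this bias measure, computing the systematic gap between two annotator groups once from direct judgements and once from elicited beliefs; all label data below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 200

# Binary labels from two annotator groups: direct judgements ...
judge_a = rng.binomial(1, 0.65, n_items)
judge_b = rng.binomial(1, 0.45, n_items)
# ... and each group's elicited beliefs about how *others* would label.
belief_a = rng.binomial(1, 0.56, n_items)
belief_b = rng.binomial(1, 0.52, n_items)

# Bias = systematic difference between the two annotator groups.
print(f"group gap, judgements: {abs(judge_a.mean() - judge_b.mean()):.3f}")
print(f"group gap, beliefs:    {abs(belief_a.mean() - belief_b.mean()):.3f}")
```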
arXiv Detail & Related papers (2024-10-21T07:44:01Z)
- The Impact of Differential Feature Under-reporting on Algorithmic Fairness [86.275300739926]
We present an analytically tractable model of differential feature under-reporting.
We then use this model to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real-world data settings, under-reporting typically leads to increased disparities.
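A minimal simulation of differential under-reporting, not the paper's analytical model: one predictive feature is recorded as 0 more often for group B than for group A, and a classifier trained on the recorded values disadvantages group B. All distributions and rates are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
feature = rng.normal(0, 1, n)        # true feature, identically distributed
y = (feature + rng.normal(0, 0.5, n) > 0.8).astype(int)

# Differential under-reporting: the feature is missing (recorded as 0)
# for 40% of group B but only 5% of group A.
missing = rng.random(n) < np.where(group == 1, 0.40, 0.05)
recorded = np.where(missing, 0.0, feature)

clf = LogisticRegression().fit(recorded.reshape(-1, 1), y)
pred = clf.predict(recorded.reshape(-1, 1))
for g, name in ((0, "A"), (1, "B")):
    m = group == g
    print(f"group {name}: selection rate {pred[m].mean():.3f}, "
          f"true-positive rate {pred[m & (y == 1)].mean():.3f}")
```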
arXiv Detail & Related papers (2024-01-16T19:16:22Z) - Measuring Adversarial Datasets [28.221635644616523]
Researchers have curated various adversarial datasets for capturing model deficiencies that cannot be revealed in standard benchmark datasets.
However, there is still no methodology to measure the intended and unintended consequences of those adversarial transformations.
We conducted a systematic survey of existing quantifiable metrics that describe text instances in NLP tasks.
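A minimal sketch of the kind of quantifiable, task-agnostic metrics such a survey catalogues, applied to an (original, adversarial) text pair; the specific metrics below are illustrative choices, not the paper's taxonomy:

```python
from difflib import SequenceMatcher

def describe(text: str) -> dict:
    """A few simple instance-level metrics for a text example."""
    tokens = text.split()
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
        "avg_token_len": sum(map(len, tokens)) / max(len(tokens), 1),
    }

def similarity(a: str, b: str) -> float:
    """Character-level similarity between original and perturbed text."""
    return SequenceMatcher(None, a, b).ratio()

orig = "the film was a delight from start to finish"
adv = "the fiml was a deilght from start to finish"  # typo-style attack
print(describe(orig))
print(describe(adv))
print(f"similarity: {similarity(orig, adv):.3f}")
```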
arXiv Detail & Related papers (2023-11-06T22:08:16Z)
- ICON$^2$: Reliably Benchmarking Predictive Inequity in Object Detection [23.419153864862174]
Concerns are rising about whether computer vision systems perform unequally across social groups.
We introduce ICON$^2$, a framework for robustly answering this question.
We conduct an in-depth study of object detection performance with respect to income on the BDD100K driving dataset.
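A minimal sketch of the stratified evaluation such a benchmark performs: group images by an attribute (an income bin here) and compare a detection metric across strata. The records are hypothetical, and a real study would compute IoU-matched average precision rather than raw recall:

```python
from collections import defaultdict

# (income_bin, n_ground_truth_objects, n_correctly_detected), per image.
records = [
    ("low", 12, 7), ("low", 9, 5), ("low", 15, 9),
    ("high", 11, 9), ("high", 14, 12), ("high", 8, 7),
]

gt = defaultdict(int)
hit = defaultdict(int)
for income_bin, n_gt, n_hit in records:
    gt[income_bin] += n_gt
    hit[income_bin] += n_hit

# Predictive inequity shows up as a gap in the per-stratum metric.
for income_bin in gt:
    print(f"income={income_bin}: recall {hit[income_bin] / gt[income_bin]:.3f}")
```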
arXiv Detail & Related papers (2023-06-07T17:42:42Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
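A minimal sketch of the edge-deletion step under a linear structural-causal-model assumption (D-BIAS itself is interactive and more general): fit the structural equation for the outcome, zero out the coefficient of the biased edge, and resimulate. All variables and coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n).astype(float)   # protected attribute
skill = rng.normal(0, 1, n)                   # legitimate cause
outcome = 1.5 * skill - 0.8 * group + rng.normal(0, 0.3, n)

# Fit the structural equation outcome ~ skill + group by least squares.
X = np.column_stack([skill, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residual = outcome - X @ coef

# "Delete" the biased edge group -> outcome by zeroing its coefficient,
# then resimulate the outcome from the remaining parents plus noise.
coef_debiased = coef.copy()
coef_debiased[1] = 0.0
outcome_debiased = X @ coef_debiased + residual

for g in (0, 1):
    print(f"group {g}: mean outcome {outcome[group == g].mean():+.2f} -> "
          f"debiased {outcome_debiased[group == g].mean():+.2f}")
```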
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Causal Scene BERT: Improving object detection by searching for challenging groups of data [125.40669814080047]
Computer vision applications rely on learning-based perception modules parameterized with neural networks for tasks like object detection.
These modules frequently have low expected error overall but high error on atypical groups of data due to biases inherent in the training process.
Our main contribution is a pseudo-automatic method to discover such groups in foresight by performing causal interventions on simulated scenes.
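A minimal sketch of intervention-based group discovery, with a stub standing in for the scene simulator; `render_and_score` and every scene parameter below are hypothetical, not the paper's API:

```python
import itertools
import random

random.seed(0)

def render_and_score(weather: str, time_of_day: str) -> float:
    """Stub: render a simulated scene variant and return detection error."""
    base = 0.10 + random.gauss(0, 0.01)
    if weather == "fog":
        base += 0.15            # pretend fog hurts the detector
    if time_of_day == "night":
        base += 0.12            # pretend night scenes hurt it too
    return base

# Causal intervention: vary one scene attribute at a time and flag
# parameter settings (groups) where error jumps well above baseline.
baseline = render_and_score("clear", "day")
for weather, tod in itertools.product(["clear", "rain", "fog"],
                                      ["day", "night"]):
    err = render_and_score(weather, tod)
    flag = "  <- challenging group" if err > baseline + 0.08 else ""
    print(f"weather={weather:5s} time={tod:5s} error={err:.2f}{flag}")
```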
arXiv Detail & Related papers (2022-02-08T05:14:16Z)
- Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets, which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task than at the identification task.
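A minimal sketch of how such a verification question is scored per subgroup: declare a pair "same person" when the cosine similarity of the two face embeddings exceeds a threshold, then report accuracy separately for each group. The embeddings, labels, and groups below are random stand-ins, so the accuracies are chance-level:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# (embedding_1, embedding_2, same_person_label, subgroup)
pairs = [(rng.normal(size=128), rng.normal(size=128),
          rng.integers(0, 2), rng.choice(["g1", "g2"])) for _ in range(200)]

threshold = 0.0
for sg in ("g1", "g2"):
    correct = [(cosine(a, b) > threshold) == bool(y)
               for a, b, y, g in pairs if g == sg]
    print(f"{sg}: verification accuracy {np.mean(correct):.3f}")
```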
arXiv Detail & Related papers (2021-10-15T22:26:20Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective unsupervised debiasing technique.
We perform clustering in the feature embedding space and identify pseudo-attributes from the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
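A minimal sketch of the clustering-plus-reweighting idea, assuming k-means pseudo-attributes and inverse-cluster-size weights; the paper's actual scheme is more involved, and the embeddings below are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in feature embeddings from a biased model: a large "easy" cluster
# and a small "bias-conflicting" cluster.
emb = np.vstack([rng.normal(0, 1, (950, 32)), rng.normal(4, 1, (50, 32))])

k = 2
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

# Pseudo-attribute = cluster id; upweight samples from rare clusters so a
# classifier cannot minimize loss by fitting only the majority cluster.
counts = np.bincount(clusters, minlength=k)
weights = (1.0 / counts)[clusters]
weights *= len(weights) / weights.sum()   # normalize to mean weight 1

for c in range(k):
    print(f"cluster {c}: {counts[c]} samples, "
          f"weight {weights[clusters == c][0]:.2f}")
```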
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Improving Fairness of AI Systems with Lossless De-biasing [15.039284892391565]
Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge.
We present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group.
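A minimal sketch of the core contrast: lossy de-biasing discards majority-group data to balance a protected attribute, whereas the lossless approach augments the scarce disadvantaged group so no original information is thrown away. Plain resampling-with-replacement stands in here for the paper's technique:

```python
import numpy as np

rng = np.random.default_rng(0)
X_dis = rng.normal(0, 1, (100, 5))   # disadvantaged group (scarce)
X_adv = rng.normal(0, 1, (900, 5))   # advantaged group

# Augment the scarce group up to parity instead of discarding majority rows.
extra_idx = rng.integers(0, len(X_dis), len(X_adv) - len(X_dis))
X_dis_balanced = np.vstack([X_dis, X_dis[extra_idx]])

X_train = np.vstack([X_adv, X_dis_balanced])
print(f"advantaged {len(X_adv)}, disadvantaged after augmentation "
      f"{len(X_dis_balanced)}; no original rows were discarded")
```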
arXiv Detail & Related papers (2021-05-10T17:38:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.