Towards causal benchmarking of bias in face analysis algorithms
- URL: http://arxiv.org/abs/2007.06570v1
- Date: Mon, 13 Jul 2020 17:10:34 GMT
- Title: Towards causal benchmarking of bias in face analysis algorithms
- Authors: Guha Balakrishnan, Yuanjun Xiong, Wei Xia, Pietro Perona
- Abstract summary: We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic ``transects'' of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
- Score: 54.19499274513654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Measuring algorithmic bias is crucial both to assess algorithmic fairness,
and to guide the improvement of algorithms. Current methods to measure
algorithmic bias in computer vision, which are based on observational datasets,
are inadequate for this task because they conflate algorithmic bias with
dataset bias.
To address this problem, we develop an experimental method for measuring
algorithmic bias of face analysis algorithms, which directly manipulates the
attributes of interest, e.g., gender and skin tone, in order to reveal causal
links between attribute variation and performance change. Our proposed method
is based on generating synthetic ``transects'' of matched sample images that
are designed to differ along specific attributes while leaving other attributes
constant. A crucial aspect of our approach is relying on the perception of
human observers, both to guide manipulations and to measure algorithmic bias.
Besides allowing the measurement of algorithmic bias, synthetic transects
have other advantages with respect to observational datasets: they sample
attributes more evenly, allowing for more straightforward bias analysis on
minority and intersectional groups; they enable prediction of bias in new
scenarios; they greatly reduce ethical and legal challenges; and they are
economical and fast to obtain, helping make bias testing affordable and widely
available.
We validate our method by comparing it to a study that employs the
traditional observational method for analyzing bias in gender classification
algorithms. The two methods reach different conclusions. While the
observational method reports gender and skin color biases, the experimental
method reveals biases due to gender, hair length, age, and facial hair.
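The measurement pattern described in the abstract can be made concrete in a few lines: render matched images that differ only in the attribute under test, score the classifier on each image, and compare per-level error rates. The sketch below illustrates this pattern only; it is not the authors' released code, and the generator interface (synthesize), the classifier (classify), and the attribute names are hypothetical placeholders.

```python
# Minimal sketch of transect-based bias measurement. Assumptions (hypothetical,
# not the paper's implementation): synthesize(latent, **attrs) renders a face
# image from a latent code with the requested attribute values, and
# classify(image) returns a predicted label (e.g., gender). Ground-truth labels
# are assumed to come from human annotators, as the abstract emphasizes.
import numpy as np

def build_transect(synthesize, latent, attribute, levels, fixed_attrs):
    """Matched images from one latent code, varying only `attribute`."""
    return [synthesize(latent, **{attribute: level}, **fixed_attrs)
            for level in levels]

def error_by_level(transects, true_labels, classify, levels):
    """Per-level error rate over matched transects, plus the max-min bias gap.

    transects[i][j] is the image for latent i rendered at levels[j];
    true_labels[i] is the human-annotated label for latent i.
    """
    errors = np.zeros(len(levels))
    for images, y in zip(transects, true_labels):
        for j, image in enumerate(images):
            errors[j] += float(classify(image) != y)
    errors /= max(len(transects), 1)
    # Every latent code appears exactly once at every attribute level, so a
    # difference in error rate across levels reflects the manipulated attribute
    # rather than confounds baked into an observational test set.
    return dict(zip(levels, errors)), float(errors.max() - errors.min())

# Hypothetical usage:
#   transects = [build_transect(synthesize, z, "hair_length", ["short", "long"],
#                               {"age": "adult"}) for z in latents]
#   per_level, gap = error_by_level(transects, human_labels, classify,
#                                   ["short", "long"])
```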
Related papers
- Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors [28.869581543676947]
Unsupervised outlier detection (OD) has numerous applications in finance, security, etc.
This work aims to shed light on the possible sources of unfairness in OD by auditing detection models under different data-centric factors.
We find that the OD algorithms under the study all exhibit fairness pitfalls, although differing in which types of data bias they are more susceptible to.
arXiv Detail & Related papers (2024-08-24T20:35:32Z)
- Benchmarking Algorithmic Bias in Face Recognition: An Experimental Approach Using Synthetic Faces and Human Evaluation [24.35436087740559]
We propose an experimental method for measuring bias in face recognition systems.
Our method is based on generating synthetic faces using a neural face generator.
We validate our method quantitatively by evaluating race and gender biases of three research-grade face recognition models.
arXiv Detail & Related papers (2023-08-10T08:57:31Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task.
arXiv Detail & Related papers (2021-10-15T22:26:20Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting (a generic sketch of this reweighting idea appears after this list).
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
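The reweighting entry above is the most self-contained technique in this list. The sketch below shows one common way to realize instance reweighting, under the assumption that it suffices to weight each training example so that the protected attribute and the label become independent in the reweighted distribution; it is a generic scheme, not necessarily the cited paper's exact method, and reweigh, author_gender, and model.fit are illustrative names.

```python
# Generic instance-reweighting sketch (an assumption about how "countering bias
# using instance reweighting" can be realized; not claimed to be the cited
# paper's exact method). The weight w(a, y) = P(a) * P(y) / P(a, y) makes the
# protected attribute and the label independent under the reweighted data.
from collections import Counter

def reweigh(protected, labels):
    """Return one weight per training instance."""
    n = len(labels)
    count_a = Counter(protected)                # marginal counts of the protected attribute
    count_y = Counter(labels)                   # marginal counts of the label
    count_ay = Counter(zip(protected, labels))  # joint counts
    return [
        (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
        for a, y in zip(protected, labels)
    ]

# Hypothetical usage: feed the weights to any trainer that accepts per-sample
# weights, e.g. model.fit(X, y, sample_weight=reweigh(author_gender, y)).
```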