Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face
Recognition
- URL: http://arxiv.org/abs/2210.09943v3
- Date: Wed, 6 Dec 2023 19:51:43 GMT
- Title: Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face
Recognition
- Authors: Samuel Dooley, Rhea Sanjay Sukthanker, John P. Dickerson, Colin White,
Frank Hutter, Micah Goldblum
- Abstract summary: Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
- Score: 107.58227666024791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition systems are widely deployed in safety-critical applications,
including law enforcement, yet they exhibit bias across a range of
socio-demographic dimensions, such as gender and race. Conventional wisdom
dictates that model biases arise from biased training data. As a consequence,
previous works on bias mitigation largely focused on pre-processing the
training data, adding penalties to prevent bias from effecting the model during
training, or post-processing predictions to debias them, yet these approaches
have shown limited success on hard problems such as face recognition. In our
work, we discover that biases are actually inherent to neural network
architectures themselves. Following this reframing, we conduct the first neural
architecture search for fairness, jointly with a search for hyperparameters.
Our search outputs a suite of models which Pareto-dominate all other
high-performance architectures and existing bias mitigation methods in terms of
accuracy and fairness, often by large margins, on the two most widely used
datasets for face identification, CelebA and VGGFace2. Furthermore, these
models generalize to other datasets and sensitive attributes. We release our
code, models and raw data files at https://github.com/dooleys/FR-NAS.
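To make the Pareto-dominance claim concrete: one model dominates another when it is at least as good on both accuracy and fairness and strictly better on at least one. The Python sketch below applies that criterion to a set of candidate models; the model names and numbers are hypothetical, and this illustrates the selection rule rather than the authors' released search code (see their repository above).

```python
from typing import List, Tuple

# Each candidate: (name, accuracy, disparity). Higher accuracy is better;
# lower disparity (e.g., the accuracy gap across subgroups) is better.
Candidate = Tuple[str, float, float]

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on both objectives
    and strictly better on at least one."""
    _, acc_a, disp_a = a
    _, acc_b, disp_b = b
    return acc_a >= acc_b and disp_a <= disp_b and (acc_a > acc_b or disp_a < disp_b)

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only the candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical search outputs vs. a baseline architecture.
models = [("arch_a", 0.91, 0.030), ("arch_b", 0.93, 0.045),
          ("baseline", 0.90, 0.050), ("arch_c", 0.89, 0.060)]
print(pareto_front(models))  # arch_a and arch_b survive; the rest are dominated.
```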
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
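As a rough illustration of the counterfactual-fairness criterion the entry above builds on (not of CLAIRE itself), the sketch below flips a binary sensitive attribute and measures how far a model's predictions move; `model` and the feature layout are hypothetical, and a faithful test would need the causal model to propagate the intervention to downstream features, which is exactly the prerequisite CLAIRE seeks to drop.

```python
import numpy as np

def counterfactual_gap(model, x: np.ndarray, sensitive_col: int) -> float:
    """Naive probe: flip a binary sensitive attribute, do(A := 1 - A),
    and measure the mean shift in predictions. A counterfactually fair
    predictor would be invariant to the flip; note this probe ignores
    how the intervention should propagate to features caused by A."""
    x_cf = x.copy()
    x_cf[:, sensitive_col] = 1 - x_cf[:, sensitive_col]
    return float(np.mean(np.abs(model.predict(x) - model.predict(x_cf))))
```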
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
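A minimal sketch of the projection step the entry above describes: remove the component of each text embedding that lies in a subspace spanned by estimated bias directions. The plain orthogonal projection below stands in for the paper's calibrated projection matrix, and the arrays are hypothetical.

```python
import numpy as np

def project_out_bias(embeddings: np.ndarray, bias_dirs: np.ndarray) -> np.ndarray:
    """Project (n, d) embeddings onto the orthogonal complement of the
    bias subspace spanned by the rows of bias_dirs (k, d), e.g. the
    normalized differences of embeddings for paired biased prompts."""
    q, _ = np.linalg.qr(bias_dirs.T)       # (d, k): orthonormal basis of the span
    proj = q @ q.T                         # (d, d): projector onto the bias subspace
    return embeddings - embeddings @ proj  # strip the biased component
```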
- Look Beyond Bias with Entropic Adversarial Data Augmentation [4.893694715581673]
Deep neural networks do not discriminate between spurious and causal patterns, and will only learn the most predictive ones while ignoring the others.
Debiasing methods were developed to make networks robust to such spurious biases, but they require knowing in advance whether a dataset is biased.
In this paper, we argue that such samples should not necessarily be needed, because the "hidden" causal information is often also contained in biased images.
arXiv Detail & Related papers (2023-01-10T08:25:24Z)
- Addressing Bias in Face Detectors using Decentralised Data collection with incentives [0.0]
We show how this data-centric approach can be facilitated in a decentralized manner to enable efficient data collection for algorithms.
We propose a face detection and anonymization approach using a hybrid MultiTask Cascaded CNN with FaceNet Embeddings.
arXiv Detail & Related papers (2022-10-28T09:54:40Z)
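The hybrid pipeline the entry above names (MTCNN detection feeding FaceNet embeddings) can be sketched with the third-party facenet-pytorch package; this is an assumed stand-in for illustration, not the authors' released code, and 'person.jpg' is a hypothetical input.

```python
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

mtcnn = MTCNN(image_size=160)                             # cascaded CNN face detector/aligner
resnet = InceptionResnetV1(pretrained='vggface2').eval()  # FaceNet-style embedder

img = Image.open('person.jpg')   # hypothetical input image
face = mtcnn(img)                # aligned face crop as a 3x160x160 tensor, or None
if face is not None:
    embedding = resnet(face.unsqueeze(0))  # 512-d identity embedding
    print(embedding.shape)                 # torch.Size([1, 512])
```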
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
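The "performance gap" the entry above refers to can be made concrete as the spread in accuracy across subgroups, measured on clean and on distorted probes; the sketch below uses hypothetical arrays and an assumed `evaluate` helper.

```python
import numpy as np

def subgroup_gap(correct: np.ndarray, groups: np.ndarray) -> float:
    """Gap between the best- and worst-performing subgroup's accuracy.
    `correct` holds per-sample hits (bool), `groups` the subgroup labels."""
    accs = [correct[groups == g].mean() for g in np.unique(groups)]
    return float(max(accs) - min(accs))

# Hypothetical usage: does blurring widen the gap?
# gap_clean = subgroup_gap(evaluate(model, clean_probes), group_labels)
# gap_blur  = subgroup_gap(evaluate(model, blurred_probes), group_labels)
```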
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
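A hedged sketch of the failure-based scheme the entry above describes, following the LfF recipe as commonly presented: a bias-amplified network is trained with generalized cross-entropy, and its per-sample failures up-weight the debiased network's loss. The q value and the loss bookkeeping here are assumptions, not a faithful reproduction of the paper's code.

```python
import torch.nn.functional as F

def lff_losses(biased_net, debiased_net, x, y, q: float = 0.7):
    """One step of paired training: returns (loss for the bias-amplified
    net, failure-weighted loss for the debiased net)."""
    logits_b, logits_d = biased_net(x), debiased_net(x)
    ce_b = F.cross_entropy(logits_b, y, reduction='none')
    ce_d = F.cross_entropy(logits_d, y, reduction='none')
    # Generalized cross-entropy (1 - p_y^q) / q pushes the biased net
    # toward "easy" (spuriously correlated) samples.
    p_y = F.softmax(logits_b, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    gce = ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()
    # Relative difficulty: samples the biased net fails on get large weights.
    w = (ce_b / (ce_b + ce_d + 1e-8)).detach()
    return gce, (w * ce_d).mean()
```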
This list is automatically generated from the titles and abstracts of the papers on this site.