Men Also Do Laundry: Multi-Attribute Bias Amplification
- URL: http://arxiv.org/abs/2210.11924v3
- Date: Tue, 30 May 2023 15:38:36 GMT
- Title: Men Also Do Laundry: Multi-Attribute Bias Amplification
- Authors: Dora Zhao, Jerone T.A. Andrews, Alice Xiang
- Abstract summary: Computer vision systems are not only reproducing but amplifying harmful social biases.
We propose a new metric: Multi-Attribute Bias Amplification.
We validate our proposed metric through an analysis of gender bias amplification on the COCO and imSitu datasets.
- Score: 2.492300648514129
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: As computer vision systems become more widely deployed, there is increasing
concern from both the research community and the public that these systems are
not only reproducing but amplifying harmful social biases. The phenomenon of
bias amplification, which is the focus of this work, refers to models
amplifying inherent training set biases at test time. Existing metrics measure
bias amplification with respect to single annotated attributes (e.g.,
$\texttt{computer}$). However, several visual datasets consist of images with
multiple attribute annotations. We show models can learn to exploit
correlations with respect to multiple attributes (e.g., {$\texttt{computer}$,
$\texttt{keyboard}$}), which are not accounted for by current metrics. In
addition, we show current metrics can give the erroneous impression that
minimal or no bias amplification has occurred as they involve aggregating over
positive and negative values. Further, these metrics lack a clear desired
value, making them difficult to interpret. To address these shortcomings, we
propose a new metric: Multi-Attribute Bias Amplification. We validate our
proposed metric through an analysis of gender bias amplification on the COCO
and imSitu datasets. Finally, we benchmark bias mitigation methods using our
proposed metric, suggesting possible avenues for future bias mitigation.
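To make the abstract's two failure modes concrete, here is a minimal, hypothetical Python sketch (the function names and the toy co-occurrence score are illustrative assumptions, not the paper's actual metric): signed per-attribute amplification scores can cancel to near zero in aggregate, while scoring attribute sets such as {$\texttt{computer}$, $\texttt{keyboard}$} surfaces multi-attribute correlations that single-attribute scores miss.

```python
from itertools import combinations
import numpy as np

def bias_score(attrs, group, attr_set):
    """Toy co-occurrence bias: P(group = 1 | all attributes in attr_set present)."""
    mask = attrs[:, list(attr_set)].all(axis=1)
    return group[mask].mean() if mask.any() else 0.0

def amplification(train_attrs, train_group, pred_attrs, pred_group, attr_sets):
    """Per-attribute-set amplification: prediction-time bias minus training-set bias."""
    return {s: bias_score(pred_attrs, pred_group, s)
               - bias_score(train_attrs, train_group, s)
            for s in attr_sets}

rng = np.random.default_rng(0)
n, m = 1000, 4                                    # images, annotated attributes
train_attrs = rng.integers(0, 2, (n, m))
train_group = rng.integers(0, 2, n)               # e.g., a binary gender annotation
# Hypothetical predictions that exaggerate two-attribute correlations,
# one group upward and the other downward:
pred_attrs, pred_group = train_attrs.copy(), train_group.copy()
pred_group[(train_attrs[:, 0] == 1) & (train_attrs[:, 1] == 1)] = 1
pred_group[(train_attrs[:, 2] == 1) & (train_attrs[:, 3] == 1)] = 0

singles = [(i,) for i in range(m)]
pairs = list(combinations(range(m), 2))
amp = amplification(train_attrs, train_group, pred_attrs, pred_group, singles + pairs)

print("signed mean over singles:", np.mean([amp[s] for s in singles]))   # near zero
print("mean |amp| over singles :", np.mean([abs(amp[s]) for s in singles]))
print("pair {0, 1} amplification:", amp[(0, 1)])  # visible only with attribute sets
```

Under these toy assumptions, the positive and negative single-attribute shifts roughly cancel in the signed mean, while the absolute mean and the pair-level score still register amplification, mirroring the failure modes the abstract describes.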
Related papers
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition [4.336779198334903]
One of the most prominent types of demographic bias is a statistical imbalance in the representation of demographic groups in datasets.
We develop a taxonomy for the classification of these metrics, providing a practical guide for the selection of appropriate metrics.
The paper provides valuable insights for researchers in AI and related fields to mitigate dataset bias and improve the fairness and accuracy of AI models.
arXiv Detail & Related papers (2023-03-28T11:04:18Z)
- Look Beyond Bias with Entropic Adversarial Data Augmentation [4.893694715581673]
Deep neural networks do not discriminate between spurious and causal patterns, and will only learn the most predictive ones while ignoring the others.
Debiasing methods were developed to make networks robust to such spurious biases, but they require knowing in advance whether a dataset is biased.
In this paper, we argue that such samples are not necessarily needed, because the "hidden" causal information is often also contained in biased images.
arXiv Detail & Related papers (2023-01-10T08:25:24Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Bias Mimicking: A Simple Sampling Approach for Bias Mitigation [57.17709477668213]
We introduce a new class-conditioned sampling method: Bias Mimicking.
Bias Mimicking improves the accuracy of sampling methods on underrepresented groups by 3% over four benchmarks.
arXiv Detail & Related papers (2022-09-30T17:33:00Z)
- Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases [55.45617404586874]
We propose a few-shot instruction-based method for prompting pre-trained language models (LMs) to detect social biases.
We show that large LMs can detect different types of fine-grained biases with similar and sometimes superior accuracy to fine-tuned models.
arXiv Detail & Related papers (2021-12-15T04:19:52Z)
- Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks [7.763173131630868]
We propose two metrics to quantitatively evaluate the class-wise bias of two models in comparison to one another.
By evaluating the performance of these new metrics and by demonstrating their practical application, we show that they can be used to measure fairness as well as bias.
arXiv Detail & Related papers (2021-10-08T22:35:34Z)
- Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization [93.8373619657239]
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features.
This simplicity bias can explain their lack of robustness out of distribution (OOD).
We demonstrate that the simplicity bias can be mitigated and OOD generalization improved.
arXiv Detail & Related papers (2021-05-12T12:12:24Z)
- Directional Bias Amplification [21.482317675176443]
Bias amplification is the tendency of models to amplify the biases present in the data they are trained on.
A metric for measuring bias amplification was introduced in the seminal work by Zhao et al.
We introduce and analyze a new, decoupled metric for measuring bias amplification, $\text{BiasAmp}_{\rightarrow}$ (Directional Bias Amplification).
arXiv Detail & Related papers (2021-02-24T22:54:21Z)
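As a companion to the decoupled metric named in the entry above, the following is a rough Python sketch of what the attribute-to-task direction of such a directional metric could look like for a binary protected attribute and binary task labels; the correlation indicator, the sign convention, and the uniform averaging are assumptions here rather than the paper's exact definition.

```python
import numpy as np

def biasamp_attr_to_task(a_train, t_train, a_test, t_pred):
    """Sketch of a directional (attribute -> task) amplification score.
    a_*: (n,) binary protected-attribute labels; t_*: (n, k) binary task
    labels (ground truth for train, model predictions for test)."""
    terms = []
    for t in range(t_train.shape[1]):
        for g in (0, 1):
            in_g_tr, in_g_te = a_train == g, a_test == g
            # Indicator: is group g positively correlated with task t in training?
            joint = (in_g_tr & (t_train[:, t] == 1)).mean()
            y = float(joint > in_g_tr.mean() * t_train[:, t].mean())
            # How much the conditional task rate P(t | a = g) moved at test time.
            delta = t_pred[in_g_te, t].mean() - t_train[in_g_tr, t].mean()
            # Growth of positive correlations and shrinkage of negative ones
            # count as amplification; the opposite direction counts against it.
            terms.append(y * delta + (1.0 - y) * (-delta))
    return float(np.mean(terms))

# Sanity check: predictions identical to training labels yield zero amplification.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 500)
t = rng.integers(0, 2, (500, 3))
print(biasamp_attr_to_task(a, t, a, t))  # 0.0
```

Because the score is signed per (attribute, task) pair before averaging, swapping the roles of the arrays gives the task-to-attribute direction, which is the decoupling the entry refers to.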
- Mitigating Gender Bias Amplification in Distribution by Posterior Regularization [75.3529537096899]
We investigate the gender bias amplification issue from the distribution perspective.
We propose a bias mitigation approach based on posterior regularization.
Our study sheds light on understanding bias amplification.
arXiv Detail & Related papers (2020-05-13T11:07:10Z)