Studying Bias in GANs through the Lens of Race
- URL: http://arxiv.org/abs/2209.02836v1
- Date: Tue, 6 Sep 2022 22:25:56 GMT
- Title: Studying Bias in GANs through the Lens of Race
- Authors: Vongani H. Maluleke, Neerja Thakkar, Tim Brooks, Ethan Weber, Trevor
Darrell, Alexei A. Efros, Angjoo Kanazawa, Devin Guillory
- Abstract summary: We study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets.
Our results show that the racial composition of generated images successfully preserves that of the training data.
However, we observe that truncation, a technique used to generate higher quality images during inference, exacerbates racial imbalances in the data.
- Score: 91.95264864405493
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study how the performance and evaluation of generative image
models are impacted by the racial composition of their training datasets. By
examining and controlling the racial distributions in various training
datasets, we are able to observe the impacts of different training
distributions on generated image quality and the racial distributions of the
generated images. Our results show that the racial composition of generated
images successfully preserves that of the training data. However, we observe
that truncation, a technique used to generate higher quality images during
inference, exacerbates racial imbalances in the data. Lastly, when examining
the relationship between image quality and race, we find that the highest
perceived visual quality images of a given race come from a distribution where
that race is well-represented, and that annotators consistently prefer
generated images of white people over those of Black people.
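The truncation the abstract refers to is the StyleGAN-style "truncation trick": sampled latent codes are pulled toward the average latent before generation, which raises perceived image quality but collapses samples toward the mode of the training distribution — the mechanism by which underrepresented groups can be suppressed further. A minimal NumPy sketch of the idea (function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def truncated_latents(n, dim, psi=0.7, n_mean=10_000, rng=None):
    """Sample latent codes and pull them toward the mean latent by factor psi.

    psi=1.0 leaves samples unchanged; psi<1 trades diversity for
    (typically) higher perceived image quality.
    """
    rng = rng or np.random.default_rng(0)
    # Approximate the mean latent from a large batch of random samples.
    z_mean = rng.standard_normal((n_mean, dim)).mean(axis=0)
    z = rng.standard_normal((n, dim))
    return z_mean + psi * (z - z_mean)
```

With psi=0 every sample collapses to the mean latent (a single "average" image); the paper's observation is that intermediate psi values already skew the racial composition of the output distribution.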
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Leveraging Diffusion Perturbations for Measuring Fairness in Computer Vision [25.414154497482162]
We demonstrate that diffusion models can be leveraged to create such a dataset.
We benchmark several vision-language models on a multi-class occupation classification task.
We find that images generated with non-Caucasian labels have a significantly higher occupation misclassification rate than images generated with Caucasian labels.
arXiv Detail & Related papers (2023-11-25T19:40:13Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with attributes unrelated to the downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Addressing Racial Bias in Facial Emotion Recognition [1.4896509623302834]
This study focuses on analyzing racial bias by sub-sampling training sets with varied racial distributions.
Our findings indicate that smaller datasets with posed faces improve on both fairness and performance metrics as the simulations approach racial balance.
In larger datasets with greater facial variation, fairness metrics generally remain constant, suggesting that racial balance by itself is insufficient to achieve parity in test performance across different racial groups.
arXiv Detail & Related papers (2023-08-09T03:03:35Z)
- The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look [0.0]
We study the effect of racial distribution in the training data on the performance of face recognition models.
We analyze these trained models using accuracy metrics, clustering metrics, UMAP projections, face quality, and decision thresholds.
arXiv Detail & Related papers (2022-11-26T07:03:24Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on test data composed of unseen contexts.
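For context, Decoupled-Mixup builds on the standard mixup augmentation, which convex-combines pairs of images and their one-hot labels; the paper's decoupled variant additionally separates discriminative from noise-prone regions before combining. A minimal sketch of plain mixup only (not the paper's decoupled method; names and defaults are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup: convex-combine two examples and their one-hot labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2
    return x_mix, y_mix
```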
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We have observed that image distortions have a relationship with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation [1.6386696247541932]
"Fairness" in AI refers to assessing algorithms for potential bias based on demographic characteristics such as race and gender.
Deep learning (DL) in cardiac MR segmentation has led to impressive results in recent years, but no work has yet investigated the fairness of such models.
We find statistically significant differences in Dice performance between different racial groups.
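The Dice score mentioned above measures the overlap between a predicted segmentation mask and the ground truth; a per-group gap in this score is the fairness signal the paper reports. A minimal sketch of the metric for binary masks (hypothetical helper, not the paper's code):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

A score of 1.0 means perfect overlap and 0.0 means none; comparing mean Dice across racial groups exposes the disparity the paper describes.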
arXiv Detail & Related papers (2021-06-23T13:27:35Z)
- One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision [75.82110684355979]
We study the racial system encoded by computer vision datasets supplying categorical race labels for face images.
We find that each dataset encodes a substantially unique racial system, despite nominally equivalent racial categories.
We find evidence that racial categories encode stereotypes, and exclude ethnic groups from categories on the basis of nonconformity to stereotypes.
arXiv Detail & Related papers (2021-02-03T22:50:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.