Sampling Strategies for Mitigating Bias in Face Synthesis Methods
- URL: http://arxiv.org/abs/2405.11320v1
- Date: Sat, 18 May 2024 15:30:14 GMT
- Title: Sampling Strategies for Mitigating Bias in Face Synthesis Methods
- Authors: Emmanouil Maragkoudakis, Symeon Papadopoulos, Iraklis Varlamis, Christos Diou
- Abstract summary: This paper examines the bias introduced by the widely popular StyleGAN2 generative model trained on the Flickr Faces HQ dataset.
We focus on two protected attributes, gender and age, and reveal that biases arise in the distribution of randomly sampled images against very young and very old age groups, as well as against female faces.
- Score: 12.604667154855532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthetically generated images can be used to create media content or to complement datasets for training image analysis models. Several methods have recently been proposed for the synthesis of high-fidelity face images; however, the potential biases introduced by such methods have not been sufficiently addressed. This paper examines the bias introduced by the widely popular StyleGAN2 generative model trained on the Flickr Faces HQ dataset and proposes two sampling strategies to balance the representation of selected attributes in the generated face images. We focus on two protected attributes, gender and age, and reveal that biases arise in the distribution of randomly sampled images against very young and very old age groups, as well as against female faces. These biases are also assessed for different image quality levels based on the GIQA score. To mitigate bias, we propose two alternative methods for sampling on selected lines or spheres of the latent space to increase the number of generated samples from the under-represented classes. The experimental results show a decrease in bias against underrepresented groups and a more uniform distribution of the protected features at different levels of image quality.
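The abstract describes the two strategies only at a high level; below is a minimal NumPy sketch of what sampling on a line or a sphere of the latent space could look like. The generator interface, the anchor latents `z_a` and `z_b`, and the 512-dimensional z-space are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # typical StyleGAN2 z-space dimensionality (assumption)

def sample_on_line(z_a, z_b, n):
    """Draw n latents on the segment between two anchor latents that
    are assumed to map to an under-represented class."""
    ts = rng.uniform(0.0, 1.0, size=(n, 1))
    return (1.0 - ts) * z_a + ts * z_b

def sample_on_sphere(z_center, radius, n):
    """Draw n latents uniformly on a sphere of the given radius around
    a latent assumed to produce the under-represented class."""
    d = rng.standard_normal((n, DIM))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return z_center + radius * d

# Hypothetical anchors; in practice these would be latents whose
# generated faces an attribute classifier placed in the minority group.
z_a = rng.standard_normal(DIM)
z_b = rng.standard_normal(DIM)
extra = np.vstack([sample_on_line(z_a, z_b, 8),
                   sample_on_sphere(z_a, radius=2.0, n=8)])
print(extra.shape)  # (16, 512): extra latents to feed the generator
```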
Related papers
- Conditional Distribution Modelling for Few-Shot Image Synthesis with Diffusion Models [29.821909424996015]
Few-shot image synthesis entails generating diverse and realistic images of novel categories using only a few example images.
We propose Conditional Distribution Modelling (CDM), a framework that effectively utilizes diffusion models for few-shot image generation.
arXiv Detail & Related papers (2024-04-25T12:11:28Z)
- Benchmarking the Fairness of Image Upsampling Methods [29.01986714656294]
We develop a set of metrics for the performance and fairness of conditional generative models and use them to benchmark the imbalances and diversity of image upsampling methods.
As part of the study, a subset of a face dataset replicates the racial distribution of common large-scale face datasets.
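The summary does not spell out the metrics; as one hedged illustration of a fairness metric for generated outputs (an assumption, not necessarily one the paper uses), the sketch below measures how far group shares deviate from a uniform split:

```python
import numpy as np
from collections import Counter

def representation_disparity(group_labels):
    """Largest deviation of any group's share from a uniform share;
    0 means perfectly balanced output. Illustrative metric only."""
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    shares = np.array([c / n for c in counts.values()])
    return float(np.max(np.abs(shares - 1.0 / k)))

# e.g. group labels predicted by an attribute classifier on outputs
labels = ["female"] * 30 + ["male"] * 70
print(representation_disparity(labels))  # 0.2 for a 70/30 split
```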
arXiv Detail & Related papers (2024-01-24T16:13:26Z)
- Unbiased Image Synthesis via Manifold Guidance in Diffusion Models [9.531220208352252]
Diffusion Models often inadvertently favor certain data attributes, undermining the diversity of generated images.
We propose a plug-and-play method named Manifold Sampling Guidance, which is also the first unsupervised method to mitigate the bias issue in DDPMs.
arXiv Detail & Related papers (2023-07-17T02:03:17Z)
- FewGAN: Generating from the Joint Distribution of a Few Images [95.6635227371479]
We introduce FewGAN, a generative model for generating novel, high-quality and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-07-18T07:11:28Z)
- Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample.
We present a novel, yet simple Mixup-variant that captures the best of both worlds.
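The summary above describes only the base Mixup scheme, so the sketch below implements vanilla Mixup; the paper's saliency-guided, calibrated variant is not detailed here and is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Vanilla Mixup: convex combination of two samples and their
    soft labels, with the ratio drawn from Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = rng.random((32, 32, 3)), rng.random((32, 32, 3))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2)
print(x_mix.shape, y_mix)  # mixed image and its mixed label
```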
arXiv Detail & Related papers (2021-12-16T11:27:48Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the performance gap of the model across different subgroups.
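A sketch of how such a distortion-versus-subgroup analysis could be set up; the Gaussian-noise distortion, the toy model, and all names are illustrative stand-ins rather than the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(images, sigma):
    """One plausible distortion; the paper studies several kinds."""
    return np.clip(images + rng.normal(0.0, sigma, images.shape), 0, 1)

def subgroup_gap(model, images, labels, groups, sigma):
    """Accuracy gap between subgroups at one distortion level."""
    distorted = add_gaussian_noise(images, sigma)
    accs = [np.mean(model(distorted[groups == g]) == labels[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)

# Toy stand-ins for face images, identities, and a recognition model.
imgs = rng.random((100, 8, 8))
labs = rng.integers(0, 2, 100)
grps = rng.integers(0, 2, 100)
toy_model = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)
print(subgroup_gap(toy_model, imgs, labs, grps, sigma=0.1))
```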
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
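A minimal sketch of the two-view idea, assuming Jensen-Shannon divergence between the two views' predictions as the consistency score (the exact rule used by Jo-SRC may differ):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two prediction vectors."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def consistency_scores(preds_view1, preds_view2):
    """Low divergence between views suggests a clean sample; high
    divergence suggests a noisy or out-of-distribution one."""
    return np.array([js_divergence(p, q)
                     for p, q in zip(preds_view1, preds_view2)])

# Softmax outputs for three samples under two different augmentations.
v1 = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
v2 = np.array([[0.85, 0.15], [0.1, 0.9], [0.25, 0.75]])
print(consistency_scores(v1, v2))  # the middle sample scores worst
```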
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- An Unsupervised Sampling Approach for Image-Sentence Matching Using Document-Level Structural Information [64.66785523187845]
We focus on the problem of unsupervised image-sentence matching.
Existing research explores utilizing document-level structural information to sample positive and negative instances for model training.
We propose a new sampling strategy to select additional intra-document image-sentence pairs as positive or negative samples.
arXiv Detail & Related papers (2021-03-21T05:43:29Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
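A rough sketch consistent with this description: a generic InfoNCE objective applied to the rows (sample view) and columns (class view) of a mini-batch's class-probability matrix. The details are assumptions, not DCDC's exact formulation.

```python
import numpy as np

def info_nce(a, b, tau=0.5):
    """Row i of `a` is positive with row i of `b` and negative with
    all other rows; rows are L2-normalized before cosine similarity."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T / tau
    return float(np.mean(np.log(np.exp(sim).sum(axis=1)) - np.diag(sim)))

def doubly_contrastive_loss(p_orig, p_aug):
    """p_orig, p_aug: (batch, classes) softmax outputs of a batch and
    its augmentation. Sample view contrasts rows (per-sample class
    distributions); class view contrasts columns (per-class sample
    distributions)."""
    return info_nce(p_orig, p_aug) + info_nce(p_orig.T, p_aug.T)

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10), size=64)               # original batch
q = np.clip(p + rng.normal(0, 0.01, p.shape), 1e-6, None)
q /= q.sum(axis=1, keepdims=True)                     # augmented batch
print(doubly_contrastive_loss(p, q))
```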
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Improving the Fairness of Deep Generative Models without Retraining [41.6580482370894]
Generative Adversarial Networks (GANs) advance face synthesis by learning the underlying distribution of observed data.
Despite the high quality of the generated faces, some minority groups are rarely generated by the trained models due to a biased image generation process.
We propose an interpretable baseline method to balance the output facial attributes without retraining.
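The summary does not specify the balancing mechanism; the sketch below shows a generic retraining-free alternative, rejection sampling against an attribute classifier, purely to illustrate post-hoc balancing. It is not the paper's interpretable method.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_sample(generate, classify, classes, per_class):
    """Keep generating until every attribute class has `per_class`
    samples; discards surplus from over-represented classes."""
    kept = {c: [] for c in classes}
    while any(len(v) < per_class for v in kept.values()):
        img = generate(rng.standard_normal(512))
        c = classify(img)
        if len(kept[c]) < per_class:
            kept[c].append(img)
    return kept

# Toy stand-ins for a GAN generator and an attribute classifier.
toy_gen = lambda z: z  # identity "generator" for illustration
toy_clf = lambda x: "female" if x[0] > 0.3 else "male"  # biased output
out = balanced_sample(toy_gen, toy_clf, ["female", "male"], per_class=5)
print({k: len(v) for k, v in out.items()})  # {'female': 5, 'male': 5}
```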
arXiv Detail & Related papers (2020-12-09T03:20:41Z)
- One-Shot Domain Adaptation For Face Generation [34.882820002799626]
We propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example.
We develop an iterative optimization scheme that rapidly adapts the weights of the model to shift the output's high-level distribution to the target's.
To generate images of the same distribution, we introduce a style-mixing technique that transfers the low-level statistics from the target to faces randomly generated with the model.
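A minimal sketch of "transferring low-level statistics", assuming an AdaIN-style per-channel mean/std transfer; the paper's actual style mixing operates inside the generator's layers and may differ:

```python
import numpy as np

def transfer_channel_stats(content, target, eps=1e-5):
    """Re-normalize each channel of `content` to match the per-channel
    mean/std of `target`; arrays are (channels, height, width)."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True) + eps
    t_mu = target.mean(axis=(1, 2), keepdims=True)
    t_sd = target.std(axis=(1, 2), keepdims=True) + eps
    return (content - c_mu) / c_sd * t_sd + t_mu

rng = np.random.default_rng(0)
generated = rng.random((3, 64, 64))  # face from the adapted model
one_shot = rng.random((3, 64, 64))   # the single target example
mixed = transfer_channel_stats(generated, one_shot)
print(np.allclose(mixed.mean(axis=(1, 2)), one_shot.mean(axis=(1, 2))))
```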
arXiv Detail & Related papers (2020-03-28T18:50:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.