Improving the Fairness of Deep Generative Models without Retraining
- URL: http://arxiv.org/abs/2012.04842v2
- Date: Mon, 29 Mar 2021 08:55:12 GMT
- Title: Improving the Fairness of Deep Generative Models without Retraining
- Authors: Shuhan Tan, Yujun Shen, Bolei Zhou
- Abstract summary: Generative Adversarial Networks (GANs) advance face synthesis through learning the underlying distribution of observed data.
Despite the high quality of the generated faces, some minority groups are rarely generated by the trained models due to a biased image generation process.
We propose an interpretable baseline method to balance the output facial attributes without retraining.
- Score: 41.6580482370894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) advance face synthesis through
learning the underlying distribution of observed data. Despite the high quality
of the generated faces, some minority groups are rarely generated by the trained
models due to a biased image generation process. To study the issue, we first
conduct an empirical study on a pre-trained face synthesis model. We observe
that, after training, the GAN model not only carries the biases in the training
data but also amplifies them to some degree in the image generation process. To
further improve the fairness of image generation, we propose an interpretable
baseline method to balance the output facial attributes without retraining. The
proposed method shifts the interpretable semantic distribution in the latent
space for a more balanced image generation while preserving the sample
diversity. Besides producing more balanced data regarding a particular
attribute (e.g., race or gender), our method is generalizable to handle
more than one attribute at a time and synthesize samples of fine-grained
subgroups. We further show that the balanced data sampled from GANs can be
used to quantify biases in other face analysis systems, such as commercial
face attribute classifiers and face super-resolution algorithms.
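A minimal NumPy sketch of the latent-shift idea in the abstract. The attribute direction `n` and the `generator` call in the usage comment are hypothetical stand-ins: in practice such a direction could be fit by a linear classifier on latent codes labeled by an off-the-shelf attribute predictor, which is an assumption of this sketch rather than the paper's stated procedure.
```python
import numpy as np

def balance_attribute(z, n, target_ratio=0.5):
    """Shift latent codes z (N, D) along the unit direction n (D,) so that a
    `target_ratio` fraction of samples falls on the positive side of the
    semantic boundary. Only the component along n changes, so the orthogonal
    components (and hence sample diversity) are preserved."""
    n = n / np.linalg.norm(n)
    scores = z @ n                                     # signed distance to boundary
    offset = -np.quantile(scores, 1.0 - target_ratio)  # move the split point to 0
    return z + offset * n

# Hypothetical usage: images = generator(balance_attribute(z, n))
# with z = np.random.standard_normal((10000, 512)) for a StyleGAN-sized space.
```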
Related papers
- Sampling Strategies for Mitigating Bias in Face Synthesis Methods [12.604667154855532]
This paper examines the bias introduced by the widely used StyleGAN2 generative model trained on the Flickr-Faces-HQ (FFHQ) dataset.
We focus on two protected attributes, gender and age, and reveal that biases arise in the distribution of randomly sampled images against very young and very old age groups, as well as against female faces.
arXiv Detail & Related papers (2024-05-18T15:30:14Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between the distributions of synthetic images for different classes; a loss sketch follows this entry.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
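A minimal PyTorch sketch of the overlap objective named above, assuming `features` come from some encoder applied to synthetic images and `labels` are their class ids; the paper's actual contrastive objective is more elaborate than this cross-class cosine penalty.
```python
import torch
import torch.nn.functional as F

def class_overlap_loss(features, labels):
    """Penalize cosine similarity between synthetic samples of *different*
    classes, a simple proxy for overlap between class distributions."""
    f = F.normalize(features, dim=1)                 # (N, D) unit-length features
    sim = f @ f.t()                                  # (N, N) pairwise cosine similarity
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)
    return sim[diff_class].mean()                    # shrinks as classes separate
```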
- Gaussian Harmony: Attaining Fairness in Diffusion-based Face Generation Models [31.688873613213392]
Diffusion models amplify the bias in the generation process, leading to an imbalance in distribution of sensitive attributes such as age, gender and race.
We mitigate the bias by localizing the means of the facial attributes in the latent space of the diffusion model using Gaussian mixture models (GMMs); a balanced-sampling sketch follows this entry.
Our results demonstrate that our approach leads to fairer data generation in terms of representational fairness while preserving the quality of generated samples.
arXiv Detail & Related papers (2023-12-21T20:06:15Z)
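A hedged scikit-learn sketch of the GMM idea: fit a mixture over latent codes, then draw equally from every component. Treating mixture components as modes of a sensitive attribute, and all function names, are assumptions of this sketch rather than the paper's API.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_attribute_gmm(latents, n_modes):
    # Components are treated as attribute modes (an assumption of this sketch).
    return GaussianMixture(n_components=n_modes, covariance_type="full").fit(latents)

def sample_balanced(gmm, per_mode, seed=0):
    """Draw the same number of latents from each component so every attribute
    mode is equally represented at generation time."""
    rng = np.random.default_rng(seed)
    draws = [rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], size=per_mode)
             for k in range(gmm.n_components)]
    return np.concatenate(draws)
```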
- Analyzing Bias in Diffusion-based Face Generation Models [75.80072686374564]
Diffusion models are increasingly popular in synthetic data generation and image editing applications.
We investigate the presence of bias in diffusion-based face generation models with respect to attributes such as gender, race, and age.
We examine how dataset size affects the attribute composition and perceptual quality of both diffusion and Generative Adversarial Network (GAN) based face generation models.
arXiv Detail & Related papers (2023-05-10T18:22:31Z)
- Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) are trained with a distribution adjustment regularizer that counteracts class imbalance; a generic reweighting sketch follows this entry.
The method is benchmarked on the CIFAR100/CIFAR100LT datasets and shows strong performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z)
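The entry does not spell out the CBDM regularizer, so the sketch below substitutes a generic inverse-frequency reweighting of the per-sample denoising loss to convey the class-balancing intent; it is explicitly not the paper's distribution adjustment term.
```python
import torch

def balanced_diffusion_loss(noise_pred, noise, class_ids, class_counts):
    """Per-sample denoising MSE reweighted by inverse class frequency.
    class_ids: (B,) labels; class_counts: (K,) training images per class.
    NOTE: a generic stand-in, not CBDM's actual regularizer."""
    per_sample = ((noise_pred - noise) ** 2).flatten(1).mean(dim=1)
    freq = class_counts[class_ids].float() / class_counts.sum()
    weights = 1.0 / (freq * len(class_counts))       # equals 1 for uniform classes
    return (weights * per_sample).mean()
```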
- Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness [15.059419033330126]
We present a novel strategy, called Fair Diffusion, to attenuate biases after the deployment of generative text-to-image models.
Specifically, we demonstrate shifting a bias in any direction based on human instructions, yielding arbitrary new proportions for, e.g., identity groups.
This control enables instructing generative image models on fairness, with no data filtering or additional training required; a prompt-level sketch follows this entry.
arXiv Detail & Related papers (2023-02-07T18:25:28Z)
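A deliberately simplified sketch of instruction-driven debiasing: draw an identity-group instruction per sample according to target proportions and pass it to a guided generation call. `generate` and its `edit_instruction` parameter are placeholders; the paper operates through semantic guidance in the diffusion process rather than this plain prompt-level control.
```python
import random

def fair_generate(generate, prompt, proportions, num_images, seed=0):
    """proportions: dict mapping instruction text -> target fraction,
    e.g. {"female person": 0.5, "male person": 0.5}."""
    rng = random.Random(seed)
    groups, weights = zip(*proportions.items())
    images = []
    for _ in range(num_images):
        instruction = rng.choices(groups, weights=weights, k=1)[0]
        # The drawn instruction steers this sample toward its identity group;
        # no data filtering or retraining is involved.
        images.append(generate(prompt, edit_instruction=instruction))
    return images
```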
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions correlate with the performance gap of the models across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Pixel-based Facial Expression Synthesis [1.7056768055368383]
We propose a pixel-based facial expression synthesis method in which each output pixel observes only one input pixel.
The proposed model is two orders of magnitude smaller, which makes it suitable for deployment on resource-constrained devices; a minimal sketch of the per-pixel constraint follows this entry.
arXiv Detail & Related papers (2020-10-27T16:00:45Z)
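One faithful way to enforce "each output pixel observes only one input pixel" is a network built solely from 1x1 convolutions, whose receptive field never exceeds a single pixel. The sketch below assumes that reading; channel widths are illustrative.
```python
import torch.nn as nn

class PixelExpressionNet(nn.Module):
    """Stacked 1x1 convolutions: each output pixel depends on exactly one
    input pixel, and the parameter count stays tiny."""
    def __init__(self, channels=(3, 64, 64, 3)):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=1), nn.ReLU()]
        self.net = nn.Sequential(*layers[:-1])  # no ReLU after the output layer

    def forward(self, x):                       # (B, 3, H, W) -> (B, 3, H, W)
        return self.net(x)
```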
- One-Shot Domain Adaptation For Face Generation [34.882820002799626]
We propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example.
We develop an iterative optimization scheme that rapidly adapts the weights of the model to shift the output's high-level distribution to the target's.
To generate images of the same distribution, we introduce a style-mixing technique that transfers the low-level statistics from the target to faces randomly generated with the model; a sketch of this statistic transfer follows this entry.
arXiv Detail & Related papers (2020-03-28T18:50:13Z)
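A sketch of the statistic-transfer step, implemented here as AdaIN-style per-channel renormalization of generated feature maps toward the one-shot target; which layers count as "low-level" and the function name are assumptions of this sketch.
```python
import torch

def transfer_stats(gen_feat, tgt_feat, eps=1e-5):
    """Match per-channel mean/std of generated features to the target.
    gen_feat: (B, C, H, W) features of randomly generated faces.
    tgt_feat: (1, C, H, W) features of the one-shot target example."""
    g_mu = gen_feat.mean(dim=(2, 3), keepdim=True)
    g_sd = gen_feat.std(dim=(2, 3), keepdim=True) + eps
    t_mu = tgt_feat.mean(dim=(2, 3), keepdim=True)
    t_sd = tgt_feat.std(dim=(2, 3), keepdim=True) + eps
    return (gen_feat - g_mu) / g_sd * t_sd + t_mu
```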
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns disentangled representation using VAE-based models, and distills the learned representation with an additional nuisance variable to the separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods using the disentangled representation; a skeletal sketch of the two-stage layout follows this entry.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
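A skeletal PyTorch sketch of the two-stage layout described above: a frozen VAE encoder supplies the disentangled code, and a separate GAN generator consumes that code plus a nuisance variable. Module internals, dimensions, and the concatenation interface are placeholders, not the paper's architecture.
```python
import torch
import torch.nn as nn

class IDGAN(nn.Module):
    def __init__(self, vae_encoder: nn.Module, generator: nn.Module, z_dim=256):
        super().__init__()
        self.encode = vae_encoder   # stage 1: pretrained disentangled encoder
        self.generate = generator   # stage 2: GAN generator taking [c, z]
        self.z_dim = z_dim

    def forward(self, x):
        with torch.no_grad():                       # keep the VAE stage fixed
            c = self.encode(x)                      # disentangled factors
        z = torch.randn(x.size(0), self.z_dim, device=x.device)  # nuisance
        return self.generate(torch.cat([c, z], dim=1))
```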