Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness
- URL: http://arxiv.org/abs/2302.10893v3
- Date: Mon, 17 Jul 2023 12:13:46 GMT
- Title: Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness
- Authors: Felix Friedrich, Manuel Brack, Lukas Struppek, Dominik Hintersdorf,
Patrick Schramowski, Sasha Luccioni, Kristian Kersting
- Abstract summary: We present a novel strategy, called Fair Diffusion, to attenuate biases after the deployment of generative text-to-image models.
Specifically, we demonstrate shifting a bias, based on human instructions, in any direction, yielding arbitrary new proportions for, e.g., identity groups.
This introduced control enables instructing generative image models on fairness, with no data filtering or additional training required.
- Score: 15.059419033330126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI models have recently achieved astonishing results in quality
and are consequently employed in a fast-growing number of applications.
However, since they are highly data-driven, relying on billion-scale datasets
randomly scraped from the internet, they also reproduce degenerate and
biased human behavior, as we demonstrate. In fact, they may even reinforce such
biases. To not only uncover but also combat these undesired effects, we present
a novel strategy, called Fair Diffusion, to attenuate biases after the
deployment of generative text-to-image models. Specifically, we demonstrate
shifting a bias, based on human instructions, in any direction, yielding
arbitrary new proportions for, e.g., identity groups. As our empirical
evaluation demonstrates, this introduced control enables instructing generative
image models on fairness, with no data filtering or additional training
required.
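At its core, Fair Diffusion steers a deployed diffusion model at sampling time: for each image, an identity attribute is drawn according to user-chosen target proportions, and an additional guidance term nudges the denoiser toward that attribute. The sketch below illustrates this guidance arithmetic with stand-in tensors; the function name, scales, and attribute labels are illustrative assumptions, not the authors' implementation.

```python
import torch

def fair_guidance_step(eps_uncond, eps_text, eps_edit, s_text=7.5, s_edit=4.0):
    """Combine denoiser outputs: classifier-free guidance toward the user
    prompt plus a semantic-guidance term toward the sampled identity
    attribute. A sketch of the idea, not the authors' exact code."""
    return (eps_uncond
            + s_text * (eps_text - eps_uncond)   # follow the user prompt
            + s_edit * (eps_edit - eps_uncond))  # steer toward the attribute

# Per image, draw the attribute from user-chosen target proportions,
# e.g. a 50/50 split over two identity groups (hypothetical labels):
attrs = ["female person", "male person"]
chosen = attrs[int(torch.multinomial(torch.tensor([0.5, 0.5]), 1))]

# With a real model, the eps_* tensors would come from one denoiser run on
# the empty prompt, the user prompt, and the attribute text `chosen`.
eps_u, eps_t, eps_e = (torch.randn(1, 4, 64, 64) for _ in range(3))
noise_pred = fair_guidance_step(eps_u, eps_t, eps_e)
print(chosen, noise_pred.shape)
```

Because the shift happens in the guidance arithmetic, the target proportions can be changed per deployment without touching the model weights or training data.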
Related papers
- Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement [3.0820287240219795]
We propose a novel approach to mitigate biases in computer vision models by utilizing counterfactual generation and fine-tuning.
Our approach leverages a curriculum learning framework combined with a fine-grained adversarial loss to fine-tune the model using adversarial examples.
We validate our approach through both qualitative and quantitative assessments, demonstrating improved bias mitigation and accuracy compared to existing methods.
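A minimal sketch of fine-tuning on original/counterfactual pairs, which is one way to realize the idea above; the loss combination, names, and toy tensors are assumptions, and the paper's curriculum schedule and fine-grained adversarial loss are not reproduced here.

```python
import torch
import torch.nn.functional as F

def debias_finetune_step(model, optimizer, x, x_counterfactual, y):
    """One fine-tuning step on an original/counterfactual pair: the model is
    asked to predict the same label for both, so the flipped protected
    attribute carries no signal. A generic sketch, not the paper's loss."""
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(x), y)
            + F.cross_entropy(model(x_counterfactual), y))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with stand-in tensors (a real pipeline would generate the
# counterfactuals, e.g. by editing the protected attribute in each image):
model = torch.nn.Linear(8, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, x_cf, y = torch.randn(4, 8), torch.randn(4, 8), torch.randint(0, 2, (4,))
print(debias_finetune_step(model, opt, x, x_cf, y))
```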
arXiv Detail & Related papers (2024-04-18T00:41:32Z)
- Would Deep Generative Models Amplify Bias in Future Models? [29.918422914275226]
We investigate the impact of deep generative models on potential social biases in upcoming computer vision models.
We conduct simulations by substituting original images in COCO and CC3M datasets with images generated through Stable Diffusion.
Contrary to expectations, our findings indicate that introducing generated images during training does not uniformly amplify bias.
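The substitution protocol can be pictured as a small data-mixing utility; the sketch below is a hedged illustration with hypothetical file paths, not the study's actual pipeline (which also covers caption matching and retraining downstream models).

```python
import random

def substitute_with_generated(real_paths, generated_paths, fraction=0.5, seed=0):
    """Replace a fraction of the real training images with generated ones,
    mimicking the substitution protocol described above (a sketch; the
    study's exact sampling and matching procedure may differ)."""
    rng = random.Random(seed)
    mixed = list(real_paths)
    idx = rng.sample(range(len(mixed)), int(fraction * len(mixed)))
    for j, i in enumerate(idx):
        mixed[i] = generated_paths[j % len(generated_paths)]
    return mixed

train_set = substitute_with_generated([f"coco/{i}.jpg" for i in range(10)],
                                      [f"sd/{i}.png" for i in range(5)], 0.4)
print(train_set)
```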
arXiv Detail & Related papers (2024-04-04T06:58:39Z)
- FairRAG: Fair Human Generation via Fair Retrieval Augmentation [27.069276012884398]
We introduce Fair Retrieval Augmented Generation (FairRAG), a novel framework that conditions pre-trained generative models on reference images retrieved from an external image database to improve fairness in human generation.
To enhance fairness, FairRAG applies simple-yet-effective debiasing strategies, providing images from diverse demographic groups during the generative process.
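One simple-yet-effective debiasing strategy in this spirit is balanced retrieval: rank candidates within each demographic group and interleave the groups when assembling the reference set. The following sketch assumes hypothetical group labels and scores; FairRAG's actual retrieval and conditioning differ in detail.

```python
import random

def fair_retrieve(candidates_by_group, k=4, seed=0):
    """Retrieve k reference images while balancing demographic groups:
    sort each group's (image, score) candidates, then round-robin across
    groups. A sketch of the retrieval-augmentation idea only."""
    rng = random.Random(seed)
    ranked = {g: [img for img, _ in sorted(c, key=lambda t: -t[1])]
              for g, c in candidates_by_group.items()}
    groups = list(ranked)
    rng.shuffle(groups)
    picked = []
    while len(picked) < k and any(ranked.values()):
        for g in groups:
            if ranked[g] and len(picked) < k:
                picked.append(ranked[g].pop(0))
    return picked

candidates = {"group_a": [("a1", .9), ("a2", .7)],
              "group_b": [("b1", .8), ("b2", .6)]}
print(fair_retrieve(candidates, k=4))
```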
arXiv Detail & Related papers (2024-03-29T03:56:19Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
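As a rough illustration of the unlearning idea (not FMD's actual algorithm), one generic step performs gradient ascent on examples identified as carrying the bias, counteracting what was learned from them:

```python
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, x_biased, y_biased, scale=1.0):
    """Generic machine-unlearning step: move *against* the gradient on
    examples identified as encoding the bias. Illustrates unlearning only;
    FMD's identification and evaluation stages are described in the paper."""
    optimizer.zero_grad()
    loss = -scale * F.cross_entropy(model(x_biased), y_biased)  # ascent
    loss.backward()
    optimizer.step()
    return -loss.item()

model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
print(unlearn_step(model, opt, x, y))
```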
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Analyzing Bias in Diffusion-based Face Generation Models [75.80072686374564]
Diffusion models are increasingly popular in synthetic data generation and image editing applications.
We investigate the presence of bias in diffusion-based face generation models with respect to attributes such as gender, race, and age.
We examine how dataset size affects the attribute composition and perceptual quality of both diffusion and Generative Adversarial Network (GAN) based face generation models.
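Measuring attribute composition typically reduces to classifying a large batch of generated faces and normalizing the counts, as in this hedged sketch with a stand-in classifier and hypothetical labels:

```python
from collections import Counter

def attribute_composition(images, attribute_classifier):
    """Estimate how a generated attribute is distributed: run an
    off-the-shelf attribute classifier over generated faces and normalize
    the counts. A sketch of the measurement, not the paper's protocol."""
    counts = Counter(attribute_classifier(img) for img in images)
    total = sum(counts.values())
    return {attr: n / total for attr, n in counts.items()}

# Toy usage with a stand-in classifier (hypothetical labels):
fake_images = list(range(100))
clf = lambda img: "group_a" if img % 3 else "group_b"
print(attribute_composition(fake_images, clf))  # {'group_a': 0.66, ...}
```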
arXiv Detail & Related papers (2023-05-10T18:22:31Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
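Of the two strategies, reweighing is the easier one to sketch: each example receives a weight inversely proportional to its group's frequency, so drifting minority populations regain influence on the loss. The helper below is a generic version, not the paper's exact formulation (model-splitting would instead fit one model per partition).

```python
from collections import Counter

def reweigh(groups):
    """Classic reweighing: weight each training example inversely to its
    group's frequency, so minority populations count as much as the
    majority during training. A generic sketch of the strategy."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

weights = reweigh(["maj", "maj", "maj", "min"])
print(weights)  # majority examples get ~0.67, the minority example gets 2.0
```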
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
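The uncalibrated core of this approach is a standard orthogonal projection: with the bias directions stacked as columns of B, the matrix P = I - B(B^T B)^{-1} B^T removes every embedding component along them. A minimal sketch (the paper's calibration step is omitted):

```python
import torch

def debias_projection(embeddings, bias_directions):
    """Project text embeddings onto the orthogonal complement of the biased
    subspace: P = I - B (B^T B)^{-1} B^T zeroes out every component along
    the bias directions. A plain orthogonal projection, without the
    paper's calibration."""
    B = bias_directions.T                      # (d, m): columns span the bias
    P = torch.eye(B.shape[0]) - B @ torch.linalg.inv(B.T @ B) @ B.T
    return embeddings @ P.T

d = 16
bias = torch.randn(2, d)                       # e.g. gendered prompt directions
emb = torch.randn(5, d)
clean = debias_projection(emb, bias)
print(torch.allclose(clean @ bias.T, torch.zeros(5, 2), atol=1e-5))  # True
```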
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
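In GAN terms, the objective change is small: NDA samples are scored by the discriminator and labeled as fake alongside the generator's outputs. A sketch of such a discriminator loss, with stand-in logits and an assumed BCE formulation:

```python
import torch
import torch.nn.functional as F

def discriminator_loss_with_nda(d_real, d_fake, d_nda):
    """Discriminator objective with Negative Data Augmentation: NDA samples
    (e.g. jigsaw-shuffled real images) are treated as an extra source of
    'fake', pushing the generator away from such degenerate regions.
    A sketch; the augmentations and exact GAN loss follow the paper."""
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
            + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
            + F.binary_cross_entropy_with_logits(d_nda, torch.zeros_like(d_nda)))

# Toy usage with stand-in discriminator logits:
print(discriminator_loss_with_nda(torch.randn(8, 1), torch.randn(8, 1),
                                  torch.randn(8, 1)))
```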
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Improving the Fairness of Deep Generative Models without Retraining [41.6580482370894]
Generative Adversarial Networks (GANs) advance face synthesis by learning the underlying distribution of observed data.
Despite the high quality of the generated faces, some minority groups are rarely generated by the trained models due to a biased image generation process.
We propose an interpretable baseline method to balance the output facial attributes without retraining.
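A retraining-free baseline of this kind can operate purely in latent space: predict the attribute from each latent code and move just enough surplus codes across the attribute boundary to hit a target ratio. Everything in this sketch (the linear probe, the reflection step, the target) is an illustrative assumption, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def balance_latents(z, attr_direction, predict_attr, target_ratio=0.5):
    """Shift surplus latent codes across a linear attribute boundary so the
    generated attribute ratio matches a target, without touching the GAN's
    weights. `attr_direction` is an assumed unit normal of that boundary."""
    labels = predict_attr(z)                        # 1 = over-represented group
    surplus = int(labels.sum()) - int(target_ratio * len(z))
    idx = torch.nonzero(labels, as_tuple=False).flatten()[:max(surplus, 0)]
    z = z.clone()
    z[idx] -= 2.0 * (z[idx] @ attr_direction).unsqueeze(1) * attr_direction
    return z                                        # reflected codes flip label

direction = F.normalize(torch.randn(16), dim=0)
probe = lambda z: ((z @ direction) > 0).long()      # stand-in linear probe
z = torch.randn(100, 16)
z_bal = balance_latents(z, direction, probe, target_ratio=0.3)
print(probe(z).float().mean().item(), probe(z_bal).float().mean().item())
```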
arXiv Detail & Related papers (2020-12-09T03:20:41Z)
- Inclusive GAN: Improving Data and Minority Coverage in Generative Models [101.67587566218928]
We formalize the problem of minority inclusion as one of data coverage.
We then propose to improve data coverage by harmonizing adversarial training with reconstructive generation.
We develop an extension that allows explicit control over the minority subgroups that the model is required to include.
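Harmonizing the two training signals amounts to summing an adversarial term with a reconstruction term anchored on the samples that must be covered. A hedged sketch of such a combined generator loss, with an assumed L1 reconstruction and weighting:

```python
import torch
import torch.nn.functional as F

def inclusive_generator_loss(d_fake_logits, x_minority, g_reconstruction,
                             rec_weight=1.0):
    """Harmonize adversarial training with reconstructive generation: besides
    fooling the discriminator, the generator must reconstruct real samples,
    which guarantees coverage of the (minority) examples it reconstructs.
    A sketch of the combined objective; the weighting is a hypothetical
    choice, not the paper's setting."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    rec = F.l1_loss(g_reconstruction, x_minority)
    return adv + rec_weight * rec

print(inclusive_generator_loss(torch.randn(8, 1),
                               torch.rand(8, 3, 32, 32),
                               torch.rand(8, 3, 32, 32)))
```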
arXiv Detail & Related papers (2020-04-07T13:31:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.