Finetuning Text-to-Image Diffusion Models for Fairness
- URL: http://arxiv.org/abs/2311.07604v2
- Date: Fri, 15 Mar 2024 08:42:39 GMT
- Title: Finetuning Text-to-Image Diffusion Models for Fairness
- Authors: Xudong Shen, Chao Du, Tianyu Pang, Min Lin, Yongkang Wong, Mohan Kankanhalli
- Abstract summary: We frame fairness as a distributional alignment problem.
Empirically, our method markedly reduces gender, racial, and their intersectional biases for occupational prompts.
Our method supports diverse perspectives of fairness beyond absolute equality.
- Score: 43.80733100304361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid adoption of text-to-image diffusion models in society underscores an urgent need to address their biases. Without interventions, these biases could propagate a skewed worldview and restrict opportunities for minority groups. In this work, we frame fairness as a distributional alignment problem. Our solution consists of two main technical contributions: (1) a distributional alignment loss that steers specific characteristics of the generated images towards a user-defined target distribution, and (2) adjusted direct finetuning of diffusion model's sampling process (adjusted DFT), which leverages an adjusted gradient to directly optimize losses defined on the generated images. Empirically, our method markedly reduces gender, racial, and their intersectional biases for occupational prompts. Gender bias is significantly reduced even when finetuning just five soft tokens. Crucially, our method supports diverse perspectives of fairness beyond absolute equality, which is demonstrated by controlling age to a $75\%$ young and $25\%$ old distribution while simultaneously debiasing gender and race. Finally, our method is scalable: it can debias multiple concepts at once by simply including these prompts in the finetuning data. We share code and various fair diffusion model adaptors at https://sail-sg.github.io/finetune-fair-diffusion/.
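The abstract's first contribution, a distributional alignment loss, can be illustrated with a minimal sketch: a frozen attribute classifier scores each generated image, the per-image probabilities are averaged into an empirical batch distribution, and a divergence to the user-defined target distribution is minimized. This is a hypothetical simplification, not the paper's exact formulation; the function name and KL-based objective are assumptions for illustration.

```python
import torch

def distributional_alignment_loss(class_logits: torch.Tensor,
                                  target_dist: torch.Tensor) -> torch.Tensor:
    """Steer the batch-level attribute distribution of generated images
    toward a user-defined target distribution.

    class_logits: (batch, num_classes) scores from a frozen attribute
                  classifier applied to the generated images.
    target_dist:  (num_classes,) desired distribution, e.g. [0.75, 0.25].
    """
    probs = torch.softmax(class_logits, dim=-1)   # per-image class probabilities
    batch_dist = probs.mean(dim=0)                # empirical batch distribution
    eps = 1e-8
    # KL(target || batch) as a simple alignment objective
    return torch.sum(target_dist * (torch.log(target_dist + eps)
                                    - torch.log(batch_dist + eps)))

# Example: steer an age attribute toward a 75% young / 25% old split
logits = torch.randn(8, 2, requires_grad=True)
loss = distributional_alignment_loss(logits, torch.tensor([0.75, 0.25]))
loss.backward()  # gradients flow back through the classifier to the generator
```

In the paper's setting, this loss would be combined with the adjusted direct finetuning (adjusted DFT) of the sampling process so that gradients defined on generated images can update the diffusion model directly.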
Related papers
- FairImagen: Post-Processing for Bias Mitigation in Text-to-Image Models [10.857020427374506]
We introduce FairImagen, a post-hoc debiasing framework that operates on prompt embeddings to mitigate societal biases.
Our framework outperforms existing post-hoc methods and offers a simple, scalable, and model-agnostic solution for equitable text-to-image generation.
arXiv Detail & Related papers (2025-10-24T11:47:15Z)
- Prompting Away Stereotypes? Evaluating Bias in Text-to-Image Models for Occupations [9.58968557546246]
We frame representational societal bias assessment as an image curation and evaluation task.
Using five state-of-the-art models, we compare neutral baseline prompts against fairness-aware controlled prompts.
Results show that prompting can substantially shift demographic representations, but with highly model-specific effects.
arXiv Detail & Related papers (2025-08-31T13:46:16Z)
- MIST: Mitigating Intersectional Bias with Disentangled Cross-Attention Editing in Text-to-Image Diffusion Models [3.3454373538792552]
We introduce a method that addresses intersectional bias in diffusion-based text-to-image models by modifying cross-attention maps in a disentangled manner.
Our approach utilizes a pre-trained Stable Diffusion model, eliminates the need for an additional set of reference images, and preserves the original quality for unaltered concepts.
arXiv Detail & Related papers (2024-03-28T17:54:38Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Debiasing Text-to-Image Diffusion Models [84.46750441518697]
Learning-based Text-to-Image (TTI) models have revolutionized the way visual content is generated in various domains.
Recent research has shown that nonnegligible social bias exists in current state-of-the-art TTI systems.
arXiv Detail & Related papers (2024-02-22T14:33:23Z)
- Unbiased Image Synthesis via Manifold Guidance in Diffusion Models [9.531220208352252]
Diffusion Models often inadvertently favor certain data attributes, undermining the diversity of generated images.
We propose a plug-and-play method named Manifold Sampling Guidance, which is also the first unsupervised method to mitigate the bias issue in DDPMs.
arXiv Detail & Related papers (2023-07-17T02:03:17Z)
- Diffusion Visual Counterfactual Explanations [51.077318228247925]
Visual Counterfactual Explanations (VCEs) are an important tool for understanding the decisions of an image classifier.
Current approaches for the generation of VCEs are restricted to adversarially robust models and often contain non-realistic artefacts.
In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers.
arXiv Detail & Related papers (2022-10-21T09:35:47Z)
- A Differentiable Distance Approximation for Fairer Image Classification [31.471917430653626]
We propose a differentiable approximation of the variance of demographics, a metric that can be used to measure the bias, or unfairness, in an AI model.
Our approximation can be optimised alongside the regular training objective which eliminates the need for any extra models during training.
We demonstrate that our approach improves the fairness of AI models in varied task and dataset scenarios.
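The summary above describes a differentiable approximation of the variance of demographics that can be optimized alongside the regular training objective. A minimal sketch of the general idea follows: soft group-membership probabilities weight each model output, and the variance of the resulting per-group means serves as a differentiable unfairness penalty. The function name and exact weighting are assumptions for illustration, not the paper's formulation.

```python
import torch

def soft_demographic_variance(pred_probs: torch.Tensor,
                              group_probs: torch.Tensor) -> torch.Tensor:
    """Differentiable proxy for the variance of a model's mean output
    across demographic groups.

    pred_probs:  (batch,) model output probabilities.
    group_probs: (batch, num_groups) soft group-membership probabilities.
    """
    # Normalize each group's soft memberships so they sum to 1 over the batch
    weights = group_probs / (group_probs.sum(dim=0, keepdim=True) + 1e-8)
    # Weighted mean output per group: (num_groups,)
    group_means = (weights * pred_probs.unsqueeze(1)).sum(dim=0)
    # Population variance across groups; zero when all groups are treated alike
    return ((group_means - group_means.mean()) ** 2).mean()

# Example: add the penalty to a task loss during training
preds = torch.sigmoid(torch.randn(16, requires_grad=True))
groups = torch.softmax(torch.randn(16, 2), dim=-1)
penalty = soft_demographic_variance(preds, groups)
penalty.backward()  # trainable without any extra model, per the summary
```

Because the penalty is a plain tensor expression, it can be added to the task loss with a weighting coefficient, matching the summary's claim that no extra models are needed during training.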
arXiv Detail & Related papers (2022-10-09T23:02:18Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.