Fair Sampling in Diffusion Models through Switching Mechanism
- URL: http://arxiv.org/abs/2401.03140v3
- Date: Thu, 1 Feb 2024 02:08:55 GMT
- Title: Fair Sampling in Diffusion Models through Switching Mechanism
- Authors: Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saerom Park
- Abstract summary: We propose a fairness-aware sampling method called \textit{attribute switching} mechanism for diffusion models.
We mathematically prove and experimentally demonstrate the effectiveness of the proposed method on two key aspects.
- Score: 4.990206466948269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have shown their effectiveness in generation tasks by
well-approximating the underlying probability distribution. However, diffusion
models are known to suffer from an amplified inherent bias from the training
data in terms of fairness. While the sampling process of diffusion models can
be controlled by conditional guidance, previous works have attempted to find
empirical guidance to achieve quantitative fairness. To address this
limitation, we propose a fairness-aware sampling method called
\textit{attribute switching} mechanism for diffusion models. Without additional
training, the proposed sampling can obfuscate sensitive attributes in generated
data without relying on classifiers. We mathematically prove and experimentally
demonstrate the effectiveness of the proposed method on two key aspects: (i)
the generation of fair data and (ii) the preservation of the utility of the
generated data.
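The attribute-switching idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `denoise_step`, the attribute values, and the transition timestep `switch_t` are all illustrative assumptions. The reverse process is conditioned on one attribute value for early (high-noise) timesteps and switched to another afterwards, which requires no extra training and no classifier.

```python
def attribute_switching_sample(denoise_step, x_t, num_steps, attr_a, attr_b, switch_t):
    """Run reverse diffusion conditioned on attr_a while t >= switch_t,
    then switch the conditioning attribute to attr_b (illustrative sketch)."""
    x = x_t
    for t in reversed(range(num_steps)):
        # switch the sensitive-attribute condition partway through the trajectory
        attr = attr_a if t >= switch_t else attr_b
        x = denoise_step(x, t, attr)
    return x
```

In such a scheme, `switch_t` trades off the two goals stated above: switching earlier obfuscates the sensitive attribute more strongly, while switching later preserves more of the content determined by the original condition.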
Related papers
- Fair Data Generation via Score-based Diffusion Model [9.734351986961613]
We propose a diffusion model-based framework, FADM: Fairness-Aware Diffusion with Meta-training.
It generates entirely new, fair synthetic data from biased datasets for use in any downstream tasks.
Experiments on real datasets demonstrate that FADM achieves better accuracy and optimal fairness in downstream tasks.
arXiv Detail & Related papers (2024-06-13T17:36:05Z)
- Balanced Mixed-Type Tabular Data Synthesis with Diffusion Models [4.624729755957781]
We introduce a fair diffusion model designed to generate balanced data on sensitive attributes.
We present empirical evidence demonstrating that our method effectively mitigates the class imbalance in training data.
arXiv Detail & Related papers (2024-04-12T06:08:43Z)
- Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [58.20016784231991]
Diffusion models operate over a sequence of timesteps instead of instantaneous input-output relationships in previous contexts.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms are highly dependent on the timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Fast Sampling via Discrete Non-Markov Diffusion Models [49.598085130313514]
We propose a discrete non-Markov diffusion model, which admits an accelerated reverse sampling for discrete data generation.
Our method significantly reduces the number of function evaluations (i.e., calls to the neural network), making the sampling process much faster.
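Reducing the number of function evaluations generally means the reverse process visits only a subset of the training timesteps. The following is a generic strided-schedule sketch, not the paper's discrete non-Markov construction; the function names and the evenly spaced schedule are assumptions for illustration.

```python
def strided_schedule(num_train_steps, num_sample_steps):
    """Evenly strided subset of timesteps, descending from num_train_steps - 1."""
    stride = num_train_steps // num_sample_steps
    return list(range(num_train_steps - 1, -1, -stride))[:num_sample_steps]

def fast_sample(denoise_fn, x, schedule):
    """One network evaluation per scheduled timestep instead of one per training step."""
    for t in schedule:
        x = denoise_fn(x, t)
    return x
```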
arXiv Detail & Related papers (2023-12-14T18:14:11Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Towards Controllable Diffusion Models via Reward-Guided Exploration [15.857464051475294]
We propose a novel framework that guides the training phase of diffusion models via reinforcement learning (RL).
RL enables calculating policy gradients via samples from a pay-off distribution proportional to exponentially scaled rewards, rather than from the policies themselves.
Experiments on 3D shape and molecule generation tasks show significant improvements over existing conditional diffusion models.
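Sampling in proportion to exponentially scaled rewards amounts to a softmax over rewards. A minimal sketch of such pay-off weights, where `beta` is an assumed temperature parameter and the function name is illustrative:

```python
import math

def payoff_weights(rewards, beta=1.0):
    """Normalized weights proportional to exp(reward / beta); drawing samples
    with these weights follows a pay-off distribution of the kind described."""
    exps = [math.exp(r / beta) for r in rewards]
    z = sum(exps)
    return [e / z for e in exps]
```

With such weights, higher-reward samples contribute more to the policy-gradient estimate, while `beta` controls how sharply the distribution concentrates on them.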
arXiv Detail & Related papers (2023-04-14T13:51:26Z)
- Enhanced Controllability of Diffusion Models via Feature Disentanglement and Realism-Enhanced Sampling Methods [27.014858633903867]
We present a training framework for feature disentanglement of Diffusion Models (FDiff).
We propose two sampling methods that can boost the realism of our Diffusion Models and also enhance the controllability.
arXiv Detail & Related papers (2023-02-28T07:43:00Z)
- Bi-Noising Diffusion: Towards Conditional Diffusion Models with Generative Restoration Priors [64.24948495708337]
We introduce a new method that brings predicted samples to the training data manifold using a pretrained unconditional diffusion model.
We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks.
arXiv Detail & Related papers (2022-12-14T17:26:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.