Diffusion-GAN: Training GANs with Diffusion
- URL: http://arxiv.org/abs/2206.02262v4
- Date: Fri, 25 Aug 2023 16:33:42 GMT
- Title: Diffusion-GAN: Training GANs with Diffusion
- Authors: Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou
- Abstract summary: Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
- Score: 135.24433011977874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are challenging to train stably, and a promising remedy of injecting instance noise into the discriminator input has not been very effective in practice. In this paper, we propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate Gaussian-mixture distributed instance noise. Diffusion-GAN consists of three components: an adaptive diffusion process, a diffusion timestep-dependent discriminator, and a generator. Both the observed and generated data are diffused by the same adaptive diffusion process. At each diffusion timestep there is a different noise-to-data ratio, and the timestep-dependent discriminator learns to distinguish the diffused real data from the diffused generated data. The generator learns from the discriminator's feedback by backpropagating through the forward diffusion chain, whose length is adaptively adjusted to balance the noise and data levels. We theoretically show that the discriminator's timestep-dependent strategy gives consistent and helpful guidance to the generator, enabling it to match the true data distribution. We demonstrate the advantages of Diffusion-GAN over strong GAN baselines on various datasets, showing that it can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
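To make the training loop concrete, here is a minimal PyTorch sketch of one Diffusion-GAN update, assuming a standard variance-preserving forward process and a non-saturating logistic GAN loss; the `diffuse` helper, the network interfaces, and the fixed chain length `T` are illustrative simplifications, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def diffuse(x, t, alphas_bar):
    """Forward-diffuse x to step t: q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I)."""
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)  # per-example cumulative alpha
    return a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * torch.randn_like(x)

def diffusion_gan_step(G, D, real, z, alphas_bar, T, opt_g, opt_d):
    # Sampling a timestep per example makes the injected instance noise a
    # Gaussian mixture across different noise-to-data ratios.
    t = torch.randint(0, T, (real.size(0),), device=real.device)

    # Real and generated data go through the SAME forward diffusion.
    real_t = diffuse(real, t, alphas_bar)
    fake_t = diffuse(G(z), t, alphas_bar)

    # Timestep-dependent discriminator: D is conditioned on t.
    d_loss = F.softplus(-D(real_t, t)).mean() + F.softplus(D(fake_t.detach(), t)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator feedback backpropagates through the (reparameterized) diffusion.
    g_loss = F.softplus(-D(diffuse(G(z), t, alphas_bar), t)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In the method itself, the maximum timestep `T` is not fixed as it is here: per the abstract, the chain length is adjusted adaptively during training to balance the noise and data levels.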
Related papers
- Rectified Diffusion Guidance for Conditional Generation [62.00207951161297]
We revisit the theory behind classifier-free guidance (CFG) and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) brings about an expectation shift of the generative distribution.
We propose ReCFG, a relaxation on the guidance coefficients such that denoising with ReCFG strictly aligns with the diffusion theory.
This way, the rectified coefficients can be readily pre-computed by traversing the observed data, leaving the sampling speed barely affected.
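For contrast, a minimal sketch of the standard CFG combination next to a ReCFG-style relaxed one; the coefficient arguments below are placeholders for the values the paper pre-computes from the observed data, not the paper's estimator.

```python
import torch

def cfg(eps_cond, eps_uncond, w):
    # Standard classifier-free guidance: the coefficients (1 + w) and -w sum to one.
    return (1 + w) * eps_cond - w * eps_uncond

def recfg(eps_cond, eps_uncond, lam_cond, lam_uncond):
    # ReCFG-style relaxation: lam_cond + lam_uncond need not equal one.
    # In the paper these coefficients are pre-computed from data; here they
    # are free parameters for illustration only.
    return lam_cond * eps_cond + lam_uncond * eps_uncond
```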
arXiv Detail & Related papers (2024-10-24T13:41:32Z)
- DDIL: Improved Diffusion Distillation With Imitation Learning [57.3467234269487]
Diffusion models excel at generative modeling (e.g., text-to-image), but sampling requires multiple passes through the denoising network.
Progressive distillation and consistency distillation have shown promise by reducing the number of passes.
We show that DDIL consistently improves on the baseline algorithms of progressive distillation (PD), latent consistency models (LCM), and Distribution Matching Distillation (DMD2).
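Reading the abstract as an imitation-learning (DAgger-style) recipe, a loose sketch of the data-collection step might look as follows; `student_rollout`, the mixing probability, and all interfaces are assumptions for illustration, not the paper's API.

```python
import torch

def forward_diffuse(x0, t, alphas_bar):
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)

def ddil_latents(x0, student_rollout, t, alphas_bar, p_onpolicy=0.5):
    """Mix the two training distributions: with probability p_onpolicy take the
    latent from the student's own backward trajectory (on-policy, countering
    covariate shift); otherwise forward-diffuse real data as usual."""
    on_policy = torch.rand(x0.size(0), device=x0.device) < p_onpolicy
    x_data = forward_diffuse(x0, t, alphas_bar)
    x_student = student_rollout(x0.shape, t)  # student generates down to step t
    return torch.where(on_policy.view(-1, 1, 1, 1), x_student, x_data)
```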
arXiv Detail & Related papers (2024-10-15T18:21:47Z)
- Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment [56.609042046176555]
A suboptimal noise-data mapping leads to slow training of diffusion models.
Drawing inspiration from the immiscibility phenomenon in physics, we propose Immiscible Diffusion.
Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area for each image.
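As a hedged sketch of the idea (not necessarily the paper's exact line of code): pair each image in a batch with a nearby noise draw via a batch-level assignment, so each image diffuses into a restricted region of noise space. The L2 cost and SciPy's Hungarian solver are assumptions here.

```python
import torch
from scipy.optimize import linear_sum_assignment

def assign_noise(images, noise):
    """Re-order a batch of noise draws so each image is paired with a nearby
    noise sample; the batch-level permutation keeps the noise marginal Gaussian."""
    cost = torch.cdist(images.flatten(1), noise.flatten(1))  # (B, B) pairwise L2 costs
    _, cols = linear_sum_assignment(cost.cpu().numpy())      # minimum-cost matching
    return noise[torch.as_tensor(cols, device=noise.device)]
```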
arXiv Detail & Related papers (2024-06-18T06:20:42Z)
- Unified Discrete Diffusion for Categorical Data [37.56355078250024]
We present a series of mathematical simplifications of the variational lower bound that enable more accurate and easy-to-optimize training for discrete diffusion.
We derive a simple formulation for backward denoising that enables exact and accelerated sampling, and importantly, an elegant unification of discrete-time and continuous-time discrete diffusion.
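For context, the forward process in discrete diffusion is a categorical transition; the sketch below shows that generic step (not this paper's specific simplifications), assuming a cumulative row-stochastic transition matrix `Q_bar_t`.

```python
import torch

def q_sample(x0, Q_bar_t):
    """Forward-diffuse categorical data: q(x_t | x_0) = Cat(x_t; row x_0 of Q_bar_t).
    x0: (B, N) integer tokens over K categories; Q_bar_t: (K, K) row-stochastic."""
    probs = Q_bar_t[x0]  # (B, N, K): transition probabilities for each token
    return torch.distributions.Categorical(probs=probs).sample()
```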
arXiv Detail & Related papers (2024-02-06T04:42:36Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Where to Diffuse, How to Diffuse, and How to Get Back: Automated Learning for Multivariate Diffusions [22.04182099405728]
Diffusion-based generative models (DBGMs) perturb data to a target noise distribution and reverse this inference diffusion process to generate samples.
We show how to maximize a lower-bound on the likelihood for any number of auxiliary variables.
We then demonstrate how to parameterize the diffusion for a specified target noise distribution.
arXiv Detail & Related papers (2023-02-14T18:57:04Z)
- Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders [137.1060633388405]
Diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain.
We propose a faster and cheaper approach that truncates the forward chain, stopping the noise injection before the data become pure random noise.
We show that the proposed model can be cast as an adversarial auto-encoder empowered by both the diffusion process and a learnable implicit prior.
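A minimal sketch of the resulting sampler, assuming a trained implicit-prior `generator` and a `reverse_step` denoising function (both illustrative names): start from the prior at a truncated step `T_trunc` and run only the remaining reverse steps.

```python
import torch

@torch.no_grad()
def tdpm_sample(generator, reverse_step, shape, T_trunc, device="cpu"):
    """Truncated diffusion sampling: the learnable implicit prior stands in for
    the untraversed part of the chain, so only T_trunc reverse steps remain."""
    z = torch.randn(shape, device=device)
    x = generator(z)                    # sample from the implicit prior at t = T_trunc
    for t in reversed(range(T_trunc)):  # far fewer steps than a full chain
        x = reverse_step(x, t)
    return x
```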
arXiv Detail & Related papers (2022-02-19T20:18:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.