Diffusion-GAN: Training GANs with Diffusion
- URL: http://arxiv.org/abs/2206.02262v4
- Date: Fri, 25 Aug 2023 16:33:42 GMT
- Title: Diffusion-GAN: Training GANs with Diffusion
- Authors: Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou
- Abstract summary: Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are challenging to train stably, and a
promising remedy of injecting instance noise into the discriminator input has
not been very effective in practice. In this paper, we propose Diffusion-GAN, a
novel GAN framework that leverages a forward diffusion chain to generate
Gaussian-mixture distributed instance noise. Diffusion-GAN consists of three
components, including an adaptive diffusion process, a diffusion
timestep-dependent discriminator, and a generator. Both the observed and
generated data are diffused by the same adaptive diffusion process. At each
diffusion timestep, there is a different noise-to-data ratio and the
timestep-dependent discriminator learns to distinguish the diffused real data
from the diffused generated data. The generator learns from the discriminator's
feedback by backpropagating through the forward diffusion chain, whose length
is adaptively adjusted to balance the noise and data levels. We theoretically
show that the discriminator's timestep-dependent strategy gives consistent and
helpful guidance to the generator, enabling it to match the true data
distribution. We demonstrate the advantages of Diffusion-GAN over strong GAN
baselines on various datasets, showing that it can produce more realistic
images with higher stability and data efficiency than state-of-the-art GANs.
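The core mechanism in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's StyleGAN2-based implementation: all function names (`make_alpha_bar`, `diffuse`, `diffusion_gan_inputs`, `adjust_chain_length`) and the schedule/threshold values are hypothetical stand-ins. Real and generated batches are diffused by the same forward chain, each example is assigned its own timestep (so each timestep carries a different noise-to-data ratio), and the timestep is passed to the discriminator alongside the diffused sample. The last function sketches, under the same assumptions, the idea of adaptively adjusting the chain length from a discriminator-overfitting statistic.

```python
import numpy as np

def make_alpha_bar(T, beta_min=1e-4, beta_max=0.02):
    """Cumulative product of (1 - beta_t) for a linear variance schedule."""
    betas = np.linspace(beta_min, beta_max, T)
    return np.cumprod(1.0 - betas)

def diffuse(x, t, alpha_bar, rng):
    """Forward-diffuse each sample x[i] to its timestep t[i]:
    y_t = sqrt(alpha_bar_t) * x + sqrt(1 - alpha_bar_t) * eps."""
    a = alpha_bar[t].reshape(-1, *([1] * (x.ndim - 1)))
    eps = rng.standard_normal(x.shape)
    return np.sqrt(a) * x + np.sqrt(1.0 - a) * eps

def diffusion_gan_inputs(x_real, x_fake, alpha_bar, rng):
    """Diffuse observed and generated data with the SAME forward chain;
    the sampled timestep t is given to the discriminator with the sample."""
    T = len(alpha_bar)
    t_real = rng.integers(0, T, size=len(x_real))
    t_fake = rng.integers(0, T, size=len(x_fake))
    return (diffuse(x_real, t_real, alpha_bar, rng), t_real), \
           (diffuse(x_fake, t_fake, alpha_bar, rng), t_fake)

def adjust_chain_length(T, d_overfit, target=0.6, step=4, T_min=8, T_max=1000):
    """Lengthen the chain (more noise) when an overfitting statistic of the
    discriminator exceeds a target, shorten it otherwise; clip to bounds."""
    T += step if d_overfit > target else -step
    return int(np.clip(T, T_min, T_max))
```

The point of the sketch is the balancing act described in the abstract: a longer chain injects more noise on average, which reins in an overconfident discriminator, while a shorter chain keeps the samples closer to the data.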
Related papers
- Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment [56.609042046176555]
Current methods diffuse each image across the entire noise space, resulting in a mixture of all images at every point in the noise layer.
We propose Immiscible Diffusion, a simple and effective method to improve the random mixture of noise-data mapping.
Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area for each image.
arXiv Detail & Related papers (2024-06-18T06:20:42Z)
- Intention-aware Denoising Diffusion Model for Trajectory Prediction [14.524496560759555]
Trajectory prediction is an essential component in autonomous driving, particularly for collision avoidance systems.
We propose utilizing the diffusion model to generate the distribution of future trajectories.
We propose an Intention-aware denoising Diffusion Model (IDM)
Our methods achieve state-of-the-art results, with an FDE of 13.83 pixels on the SDD dataset and 0.36 meters on the ETH/UCY dataset.
arXiv Detail & Related papers (2024-03-14T09:05:25Z)
- Diffusion-TS: Interpretable Diffusion for General Time Series Generation [6.639630994040322]
Diffusion-TS is a novel diffusion-based framework that generates time series samples of high quality.
At each diffusion step, the model is trained to reconstruct the sample directly rather than the noise, combined with a Fourier-based loss term.
Results show that Diffusion-TS achieves state-of-the-art results across a variety of realistic time-series analyses.
arXiv Detail & Related papers (2024-03-04T05:39:23Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from instilling task-specific information into the score function to steer sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Improving and Unifying Discrete&Continuous-time Discrete Denoising Diffusion [41.03548068279262]
We present a series of mathematical simplifications of the variational lower bound that enable more accurate and easy-to-optimize training for discrete diffusion.
We derive a simple formulation for backward denoising that enables exact and accelerated sampling, and importantly, an elegant unification of discrete-time and continuous-time discrete diffusion.
arXiv Detail & Related papers (2024-02-06T04:42:36Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Where to Diffuse, How to Diffuse, and How to Get Back: Automated Learning for Multivariate Diffusions [22.04182099405728]
Diffusion-based generative models (DBGMs) perturb data to a target noise distribution and reverse this inference diffusion process to generate samples.
We show how to maximize a lower-bound on the likelihood for any number of auxiliary variables.
We then demonstrate how to parameterize the diffusion for a specified target noise distribution.
arXiv Detail & Related papers (2023-02-14T18:57:04Z)
- Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders [137.1060633388405]
Diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain.
We propose a faster and cheaper approach that truncates the forward chain, stopping the noise injection before the data become pure random noise.
We show that the proposed model can be cast as an adversarial auto-encoder empowered by both the diffusion process and a learnable implicit prior.
arXiv Detail & Related papers (2022-02-19T20:18:49Z)
- Non Gaussian Denoising Diffusion Models [91.22679787578438]
We show that noise drawn from a Gamma distribution yields improved results for image and speech generation.
We also show that using a mixture of Gaussian noise variables in the diffusion process improves the performance over a diffusion process that is based on a single distribution.
arXiv Detail & Related papers (2021-06-14T16:42:43Z)
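The last entry's idea of swapping the noise family in the forward process can be illustrated with a short sketch. This is a simplified toy, not the parameterization from that paper: the function name and the shape parameter `k` are hypothetical. A Gamma(k, θ) draw has mean kθ and variance kθ²; centering it and choosing θ = sqrt((1 − αₜ)/k) yields a zero-mean, skewed perturbation whose variance matches the usual Gaussian schedule.

```python
import numpy as np

def gamma_diffusion_step(x, alpha_t, k, rng):
    """One forward-diffusion step using centered Gamma noise instead of
    Gaussian noise. The scale theta is chosen so the perturbation has
    zero mean and variance (1 - alpha_t), matching the Gaussian case."""
    theta = np.sqrt((1.0 - alpha_t) / k)
    noise = rng.gamma(shape=k, scale=theta, size=x.shape) - k * theta
    return np.sqrt(alpha_t) * x + noise
```

Smaller `k` makes the noise more skewed while keeping the first two moments fixed, which is the knob such non-Gaussian variants expose.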
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.