Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment
- URL: http://arxiv.org/abs/2406.12303v2
- Date: Wed, 30 Oct 2024 23:30:45 GMT
- Title: Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment
- Authors: Yiheng Li, Heyang Jiang, Akio Kodaira, Masayoshi Tomizuka, Kurt Keutzer, Chenfeng Xu
- Abstract summary: Suboptimal noise-data mapping leads to slow training of diffusion models.
Drawing inspiration from the immiscibility phenomenon in physics, we propose Immiscible Diffusion.
Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area for each image.
- Score: 56.609042046176555
- Abstract: In this paper, we point out that suboptimal noise-data mapping leads to slow training of diffusion models. During diffusion training, current methods diffuse each image across the entire noise space, resulting in a mixture of all images at every point in the noise layer. We emphasize that this random mixture of noise-data mapping complicates the optimization of the denoising function in diffusion models. Drawing inspiration from the immiscibility phenomenon in physics, we propose Immiscible Diffusion, a simple and effective method to improve this random mixture of noise-data mapping. In physics, miscibility varies with the intermolecular forces involved; immiscibility means that the mixed molecular sources remain distinguishable. Inspired by this concept, we propose an assignment-then-diffusion training strategy to achieve Immiscible Diffusion. As one example, prior to diffusing the image data into noise, we assign diffusion target noise to the image data by minimizing the total image-noise pair distance within a mini-batch. The assignment acts analogously to an external force that separates the diffuse-able areas of images, mitigating the inherent difficulties in diffusion training. Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area for each image while preserving the Gaussian distribution of noise. In this way, each image is preferentially projected to nearby noise. Experiments demonstrate that our method achieves up to 3x faster training for unconditional Consistency Models on the CIFAR dataset, as well as for DDIM and Stable Diffusion on the CelebA and ImageNet datasets, and in class-conditional training and fine-tuning. In addition, we conduct a thorough analysis that sheds light on how the method improves diffusion training speed while also improving fidelity. The code is available at https://yhli123.github.io/immiscible-diffusion
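The batch-wise assignment step described in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the authors' released code: it assumes a standard DDPM-style training loop in which a mini-batch of images and an equally sized batch of Gaussian noise are drawn, and it uses SciPy's Hungarian solver (scipy.optimize.linear_sum_assignment) to minimize the total image-noise distance before the forward diffusion step. The function and variable names here are illustrative.

```python
import torch
from scipy.optimize import linear_sum_assignment


def assign_noise(images: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Re-order a batch of i.i.d. Gaussian noise so that the total
    image-noise L2 distance within the mini-batch is minimized.

    A sketch of the assignment-then-diffusion idea, not the paper's
    exact implementation. `images` and `noise` are (B, C, H, W) tensors.
    """
    # Pairwise L2 distances between flattened images and noise: (B, B).
    cost = torch.cdist(images.flatten(1), noise.flatten(1))
    # Hungarian algorithm minimizes the total assignment cost.
    _, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return noise[torch.as_tensor(cols, device=noise.device)]


# Hypothetical use inside a DDPM-style training step:
# noise = torch.randn_like(images)
# noise = assign_noise(images, noise)
# x_t = alpha_bar_t.sqrt() * images + (1 - alpha_bar_t).sqrt() * noise
```

Because the assignment only permutes noise samples that were drawn i.i.d. from a Gaussian, the marginal noise distribution is unchanged; all that changes is the pairing, so each image is matched to nearby noise.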
Related papers
- Diffusion Priors for Variational Likelihood Estimation and Image Denoising [10.548018200066858]
We propose adaptive likelihood estimation and MAP inference during the reverse diffusion process to tackle real-world noise.
Experiments and analyses on diverse real-world datasets demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-10-23T02:52:53Z)
- NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation [86.7260950382448]
We propose NoiseDiffusion, a novel approach that corrects noise to maintain image validity.
NoiseDiffusion performs within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
arXiv Detail & Related papers (2024-03-13T12:32:25Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- Diffusion Models With Learned Adaptive Noise [12.530583016267768]
We propose a learned diffusion process that applies noise at different rates across an image.
The resulting model, MuLAN, sets a new state-of-the-art in density estimation on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2023-12-20T18:00:16Z)
- Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis [39.671396431940224]
Diffusion models have shown remarkable results in image synthesis by gradually removing noise and amplifying signals.
We propose a novel generative process that synthesizes images in a coarse-to-fine manner.
Experiments show that the proposed model outperforms the previous method in FID on LSUN bedroom and church datasets.
arXiv Detail & Related papers (2022-07-16T15:00:21Z)
- Diffusion-GAN: Training GANs with Diffusion [135.24433011977874]
Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
arXiv Detail & Related papers (2022-06-05T20:45:01Z)
- Denoising Diffusion Gamma Models [91.22679787578438]
We introduce the Denoising Diffusion Gamma Model (DDGM) and show that noise from Gamma distribution provides improved results for image and speech generation.
Our approach preserves the ability to efficiently sample states in the training diffusion process while using Gamma noise.
arXiv Detail & Related papers (2021-10-10T10:46:31Z)
- Non Gaussian Denoising Diffusion Models [91.22679787578438]
We show that noise from Gamma distribution provides improved results for image and speech generation.
We also show that using a mixture of Gaussian noise variables in the diffusion process improves the performance over a diffusion process that is based on a single distribution.
arXiv Detail & Related papers (2021-06-14T16:42:43Z)