Denoising Diffusion Gamma Models
- URL: http://arxiv.org/abs/2110.05948v1
- Date: Sun, 10 Oct 2021 10:46:31 GMT
- Title: Denoising Diffusion Gamma Models
- Authors: Eliya Nachmani, Robin San Roman, Lior Wolf
- Abstract summary: We introduce the Denoising Diffusion Gamma Model (DDGM) and show that noise drawn from a Gamma distribution provides improved results for image and speech generation.
Our approach preserves the ability to efficiently sample a state in the training diffusion process while using Gamma noise.
- Score: 91.22679787578438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative diffusion processes are an emerging and effective tool for image
and speech generation. In existing methods, the underlying noise distribution
of the diffusion process is Gaussian. However, fitting distributions with more
degrees of freedom could improve the performance of such generative models. In
this work, we investigate other types of noise distributions for the diffusion
process. Specifically, we introduce the Denoising Diffusion Gamma Model (DDGM)
and show that noise drawn from a Gamma distribution provides improved results
for image and speech generation. Our approach preserves the ability to
efficiently sample a state in the training diffusion process while using Gamma
noise.
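As a rough illustration of the idea, the sketch below samples a noised state from a clean input using zero-centered Gamma noise whose variance matches the usual Gaussian schedule. The function name `forward_step_gamma`, the shape parameter `k_t`, and the variance-matching choice of scale are illustrative assumptions, not the paper's exact DDGM parameterization or noise schedule.

```python
import numpy as np

def forward_step_gamma(x0, alpha_bar_t, k_t=1.0, rng=None):
    """Sample a noised state x_t from x0 with centered Gamma noise.

    A Gamma(k, theta) variable has mean k*theta and variance k*theta**2,
    so theta = sqrt((1 - alpha_bar_t) / k_t) matches the variance of the
    Gaussian case, and subtracting k_t*theta centers the noise at zero.
    (Hypothetical parameterization for illustration only.)
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.sqrt((1.0 - alpha_bar_t) / k_t)
    g = rng.gamma(shape=k_t, scale=theta, size=x0.shape)
    noise = g - k_t * theta  # zero mean, variance (1 - alpha_bar_t)
    return np.sqrt(alpha_bar_t) * x0 + noise

# Example: one noised sample at an intermediate step of an assumed schedule.
x0 = np.zeros((4, 4))
xt = forward_step_gamma(x0, alpha_bar_t=0.5, k_t=2.0)
```

Centering the noise by subtracting its mean keeps the noised state zero-mean, which is what lets a skewed Gamma process stand in for the Gaussian one while still matching the scheduled variance at each step.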
Related papers
- Ultrasound Image Enhancement with the Variance of Diffusion Models [7.360352432782388]
Enhancing ultrasound images requires a delicate balance between contrast, resolution, and speckle preservation.
This paper introduces a novel approach that integrates adaptive beamforming with denoising diffusion-based variance imaging.
arXiv Detail & Related papers (2024-09-17T17:29:33Z)
- Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment [56.609042046176555]
Suboptimal noise-data mapping leads to slow training of diffusion models.
Drawing inspiration from the immiscibility phenomenon in physics, we propose Immiscible Diffusion.
Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area for each image.
arXiv Detail & Related papers (2024-06-18T06:20:42Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- An Analysis of the Variance of Diffusion-based Speech Enhancement [15.736484513462973]
We show that the scale of the variance is a dominant parameter for speech enhancement performance.
We show that a larger variance increases the noise attenuation and allows for reducing the computational footprint.
arXiv Detail & Related papers (2024-02-01T17:46:19Z)
- Diffusion Models With Learned Adaptive Noise [12.530583016267768]
We propose MuLAN, a learned diffusion process that applies noise at different rates across an image.
MuLAN sets a new state-of-the-art in density estimation on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2023-12-20T18:00:16Z)
- Denoising Diffusion Bridge Models [54.87947768074036]
Diffusion models are powerful generative models that map noise to data using stochastic processes.
For many applications such as image editing, the model input comes from a distribution that is not random noise.
In our work, we propose Denoising Diffusion Bridge Models (DDBMs).
arXiv Detail & Related papers (2023-09-29T03:24:24Z)
- Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion Models [76.46246743508651]
We show that current diffusion models actually have an expressive bottleneck in backward denoising.
We introduce soft mixture denoising (SMD), an expressive and efficient model for backward denoising.
arXiv Detail & Related papers (2023-09-25T12:03:32Z)
- SVNR: Spatially-variant Noise Removal with Denoising Diffusion [43.2405873681083]
We present a novel formulation of denoising diffusion that assumes a more realistic, spatially-variant noise model.
In experiments we demonstrate the advantages of our approach over a strong diffusion model baseline, as well as over a state-of-the-art single image denoising method.
arXiv Detail & Related papers (2023-06-28T09:32:00Z)
- Non Gaussian Denoising Diffusion Models [91.22679787578438]
We show that noise from a Gamma distribution provides improved results for image and speech generation.
We also show that using a mixture of Gaussian noise variables in the diffusion process improves the performance over a diffusion process that is based on a single distribution.
arXiv Detail & Related papers (2021-06-14T16:42:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.