Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models
for Inverse Problems through Stochastic Contraction
- URL: http://arxiv.org/abs/2112.05146v1
- Date: Thu, 9 Dec 2021 04:28:41 GMT
- Authors: Hyungjin Chung, Byeongsu Sim, Jong Chul Ye
- Abstract summary: Diffusion models have a critical downside: they are inherently slow to sample from, needing a few thousand iterations to generate images from pure Gaussian noise.
We show that starting from Gaussian noise is unnecessary. Instead, a single forward diffusion from a better initialization significantly reduces the number of sampling steps in the reverse conditional diffusion.
The new sampling strategy, dubbed Come-Closer-Diffuse-Faster (CCDF), also reveals new insight into how existing feed-forward neural network approaches for inverse problems can be synergistically combined with diffusion models.
- Score: 31.61199061999173
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Diffusion models have recently attracted significant interest within the
community owing to their strong performance as generative models. Furthermore,
their application to inverse problems has demonstrated state-of-the-art
performance. Unfortunately, diffusion models have a critical downside: they
are inherently slow to sample from, needing a few thousand iterations to
generate images from pure Gaussian noise. In this work, we show that starting
from Gaussian noise is unnecessary. Instead, a single forward diffusion from a
better initialization significantly reduces the number of sampling steps in
the reverse conditional diffusion. This phenomenon is formally explained by
the contraction theory of stochastic difference equations such as our
conditional diffusion strategy: alternating applications of reverse diffusion
followed by a non-expansive data consistency step. The new sampling strategy,
dubbed Come-Closer-Diffuse-Faster (CCDF), also reveals new insight into how
existing feed-forward neural network approaches for inverse problems can be
synergistically combined with diffusion models. Experimental results on
super-resolution, image inpainting, and compressed sensing MRI demonstrate
that our method achieves state-of-the-art reconstruction performance with
significantly fewer sampling steps.
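The abstract's recipe can be sketched in a few lines. The following is a minimal illustration of the CCDF idea, not the authors' code: rather than starting reverse diffusion from pure noise at t = T, forward-diffuse a rough initial estimate `x0_hat` (e.g. the output of a feed-forward reconstruction network) to an intermediate time t0 << T, then alternate DDPM-style reverse updates with a non-expansive data-consistency operator. The function names, the simplified update rule, and the noise schedule below are assumptions for illustration.

```python
import numpy as np

def forward_diffuse(x0, alpha_bar_t, rng):
    """Single forward jump: q(x_t | x_0) = N(sqrt(a_bar) * x0, (1 - a_bar) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * noise

def ccdf_sample(x0_hat, eps_fn, data_consistency, alphas, t0, rng):
    """Reverse diffusion started from t0 (not T), alternated with data consistency.

    eps_fn(x, t)        -- noise predictor (stands in for the trained network)
    data_consistency(x) -- non-expansive projection enforcing the measurements
    """
    alpha_bar = np.cumprod(alphas)
    x = forward_diffuse(x0_hat, alpha_bar[t0], rng)   # "come closer": one forward jump
    for t in range(t0, 0, -1):                        # "diffuse faster": only t0 steps
        a, a_bar = alphas[t], alpha_bar[t]
        eps = eps_fn(x, t)
        # Simplified DDPM mean update from the predicted noise.
        x = (x - (1.0 - a) / np.sqrt(1.0 - a_bar) * eps) / np.sqrt(a)
        if t > 1:
            x += np.sqrt(1.0 - a) * rng.standard_normal(x.shape)
        x = data_consistency(x)                       # non-expansive consistency step
    return x
```

Because the data-consistency step is non-expansive, the paper's contraction argument says the estimation error shrinks across these alternating steps, which is why t0 can be far smaller than T.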
Related papers
- Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise [34.65659277870287]
Research on denoising diffusion models has expanded its application to the field of image restoration.
We propose Resfusion, a framework that incorporates the residual term into the diffusion forward process.
We show that Resfusion exhibits competitive performance on ISTD dataset, LOL dataset and Raindrop dataset with only five sampling steps.
arXiv Detail & Related papers (2023-11-25T02:09:38Z) - Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion
Models [76.46246743508651]
We show that current diffusion models actually have an expressive bottleneck in backward denoising.
We introduce soft mixture denoising (SMD), an expressive and efficient model for backward denoising.
arXiv Detail & Related papers (2023-09-25T12:03:32Z) - Fast Diffusion EM: a diffusion model for blind inverse problems with
application to deconvolution [0.0]
Current methods assume the degradation is known and provide impressive results in terms of restoration and diversity.
In this work, we leverage the efficiency of those models to jointly estimate the restored image and unknown parameters of the kernel model.
Our method alternates between approximating the expected log-likelihood of the problem using samples drawn from a diffusion model and a step to estimate unknown model parameters.
arXiv Detail & Related papers (2023-09-01T06:47:13Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from slow inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - A Variational Perspective on Solving Inverse Problems with Diffusion
Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - Diffusion Models Generate Images Like Painters: an Analytical Theory of Outline First, Details Later [1.8416014644193066]
We observe that the reverse diffusion process that underlies image generation has the following properties.
Individual trajectories tend to be low-dimensional and resemble 2D rotations.
We find that this solution accurately describes the initial phase of image generation for pretrained models.
arXiv Detail & Related papers (2023-03-04T20:08:57Z) - ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion
Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z) - Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
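The diffusion-posterior-sampling entry above can be illustrated with a single guided reverse step. This is my sketch, not the paper's implementation: for a linear forward model y = A x + noise, each unconditional reverse update is corrected by the gradient of the squared measurement residual, evaluated at a Tweedie-style estimate of the clean image. The function name, the zero-noise predictor used in the test, and the fixed step size are assumptions.

```python
import numpy as np

def dps_step(x, t, eps_fn, A, y, step_size, alphas, alpha_bar, rng):
    """One reverse step steered toward measurements y = A x + noise."""
    a, a_bar = alphas[t], alpha_bar[t]
    eps = eps_fn(x, t)
    # Tweedie estimate of the clean image from the current iterate.
    x0_hat = (x - np.sqrt(1.0 - a_bar) * eps) / np.sqrt(a_bar)
    # Unconditional DDPM-style update.
    x_prev = (x - (1.0 - a) / np.sqrt(1.0 - a_bar) * eps) / np.sqrt(a)
    if t > 1:
        x_prev += np.sqrt(1.0 - a) * rng.standard_normal(x.shape)
    # Likelihood correction: gradient of ||y - A x0_hat||^2 w.r.t. x0_hat,
    # used as a surrogate for the full gradient through the network.
    grad = A.T @ (A @ x0_hat - y)
    return x_prev - step_size * grad
```

The correction term is what lets an unconditionally trained diffusion model handle different measurement operators and noise statistics without retraining.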
This list is automatically generated from the titles and abstracts of the papers in this site.