Compensation Sampling for Improved Convergence in Diffusion Models
- URL: http://arxiv.org/abs/2312.06285v1
- Date: Mon, 11 Dec 2023 10:39:01 GMT
- Title: Compensation Sampling for Improved Convergence in Diffusion Models
- Authors: Hui Lu, Albert Ali Salah, Ronald Poppe
- Abstract summary: Diffusion models achieve remarkable quality in image generation, but at a cost.
Iterative denoising requires many time steps to produce high fidelity images.
We argue that the denoising process is crucially limited by an accumulation of the reconstruction error due to an initial inaccurate reconstruction of the target data.
- Score: 12.311434647047427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models achieve remarkable quality in image generation, but at a
cost. Iterative denoising requires many time steps to produce high fidelity
images. We argue that the denoising process is crucially limited by an
accumulation of the reconstruction error due to an initial inaccurate
reconstruction of the target data. This leads to lower quality outputs, and
slower convergence. To address this issue, we propose compensation sampling to
guide the generation towards the target domain. We introduce a compensation
term, implemented as a U-Net, which adds negligible computation overhead during
training and, optionally, inference. Our approach is flexible and we
demonstrate its application in unconditional generation, face inpainting, and
face de-occlusion using benchmark datasets CIFAR-10, CelebA, CelebA-HQ,
FFHQ-256, and FSG. Our approach consistently yields state-of-the-art results in
terms of image quality, while accelerating the denoising process to converge
during training by up to an order of magnitude.
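The abstract specifies the mechanism only at a high level: a learned compensation term, implemented as a U-Net, corrects the inaccurate early reconstruction of the target data during denoising. The sketch below is a minimal PyTorch illustration of that idea, assuming a standard DDPM-style ancestral sampler; `eps_model`, `comp_model`, and the exact update rule are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of compensation sampling in a DDPM-style reverse loop.
# The abstract only states that a learned compensation U-Net corrects the
# initially inaccurate reconstruction; the update rule below is an assumption.
import torch

@torch.no_grad()
def compensated_ddpm_sample(eps_model, comp_model, alphas_cumprod, shape, device="cpu"):
    """eps_model: noise-prediction U-Net; comp_model: hypothetical compensation U-Net.
    alphas_cumprod: 1-D tensor of cumulative products of alphas, already on `device`."""
    T = alphas_cumprod.shape[0]
    prev = torch.cat([torch.ones(1, device=device), alphas_cumprod[:-1]])
    alphas = alphas_cumprod / prev                       # per-step alpha_t
    x = torch.randn(shape, device=device)                # x_T ~ N(0, I)
    for t in reversed(range(T)):
        a_bar, a = alphas_cumprod[t], alphas[t]
        a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0, device=device)
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)                      # predicted noise
        x0_hat = (x - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()  # rough x_0 estimate
        # Assumed compensation step: a second, small U-Net nudges the x_0
        # estimate toward the target domain before the posterior is formed.
        x0_hat = x0_hat + comp_model(x0_hat, t_batch)
        # Standard DDPM posterior mean and variance, computed from the
        # compensated x0_hat instead of the raw estimate.
        mean = (a_bar_prev.sqrt() * (1 - a) / (1 - a_bar)) * x0_hat \
             + (a.sqrt() * (1 - a_bar_prev) / (1 - a_bar)) * x
        var = (1 - a_bar_prev) / (1 - a_bar) * (1 - a)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + var.sqrt() * noise
    return x
```

Note that if `comp_model` returns zeros, this reduces to plain DDPM ancestral sampling, so the compensation term is easy to toggle off at inference, consistent with the abstract's statement that it is optional there.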
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation [49.3016007471979]
Large generative diffusion models have revolutionized text-to-image generation and offer immense potential for conditional generation tasks.
However, their widespread adoption is hindered by the high computational cost, which limits their real-time application.
We introduce a novel method, dubbed CoDi, that adapts a pre-trained latent diffusion model to accept additional image conditioning inputs.
arXiv Detail & Related papers (2023-10-02T17:59:18Z)
- Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising [16.43285056788183]
We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG).
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It then employs a diffusion algorithm to generate the residual high-frequency details, thereby enhancing visual quality (see the sketch after this list).
arXiv Detail & Related papers (2023-09-19T16:01:20Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation.
However, GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [22.709205282657617]
Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration.
It produces more realistic and detailed images than existing regression-based methods.
arXiv Detail & Related papers (2023-03-20T20:28:17Z)
- High Perceptual Quality Image Denoising with a Posterior Sampling CGAN [31.42883613312055]
We propose a new approach to image denoising using conditional generative adversarial networks (CGANs).
Our goal is to achieve high perceptual quality with acceptable distortion.
We showcase our proposed method with a novel denoiser architecture that achieves the reformulated denoising goal and produces vivid and diverse outcomes even at high noise levels.
arXiv Detail & Related papers (2021-03-06T20:18:45Z)
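The Reconstruct-and-Generate Diffusion Model entry above describes a two-stage split: a deterministic reconstruction network recovers most of the clean signal, and a diffusion model synthesizes only the residual high-frequency detail. Below is a minimal sketch of that decomposition, with all names and interfaces hypothetical rather than taken from the paper.

```python
# Hypothetical sketch of the reconstruct-then-generate decomposition described
# in the RnG entry; function and module names are illustrative, not the
# paper's actual API.
import torch

@torch.no_grad()
def rng_denoise(reconstructor, residual_diffusion, noisy):
    base = reconstructor(noisy)           # deterministic estimate of the clean signal
    residual = residual_diffusion(base)   # diffusion sampler conditioned on the base estimate
    return base + residual                # output: coarse base plus high-frequency residual
```

Splitting the output this way confines the stochastic diffusion stage to the residual high-frequency content, which is presumably how the method enhances visual quality without disturbing the recovered base signal.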
This list is automatically generated from the titles and abstracts of the papers on this site.