WaveDM: Wavelet-Based Diffusion Models for Image Restoration
- URL: http://arxiv.org/abs/2305.13819v2
- Date: Thu, 25 Jan 2024 11:49:55 GMT
- Title: WaveDM: Wavelet-Based Diffusion Models for Image Restoration
- Authors: Yi Huang, Jiancheng Huang, Jianzhuang Liu, Mingfu Yan, Yu Dong, Jiaxi
Lv, Chaoqi Chen, Shifeng Chen
- Abstract summary: Wavelet-Based Diffusion Model (WaveDM) learns the distribution of clean images in the wavelet domain conditioned on the wavelet spectrum of degraded images after wavelet transform.
WaveDM achieves state-of-the-art performance with efficiency comparable to traditional one-pass methods.
- Score: 43.254438752311714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latest diffusion-based methods for many image restoration tasks
outperform traditional models, but they suffer from long inference times. To
tackle this, this paper proposes a Wavelet-Based Diffusion Model (WaveDM).
WaveDM learns the distribution of clean images in the wavelet domain
conditioned on the wavelet spectrum of degraded images after the wavelet
transform, which makes each sampling step cheaper than modeling in the spatial
domain. To ensure restoration performance, a dedicated training strategy is
proposed in which the low-frequency and high-frequency spectra are learned by
distinct modules. In addition, an Efficient Conditional Sampling (ECS) strategy
is developed from experiments, which reduces the total number of sampling steps
to around 5. Evaluations on twelve benchmark datasets covering image raindrop
removal, rain streak removal, dehazing, defocus deblurring, demoiréing, and
denoising demonstrate that WaveDM achieves state-of-the-art performance with
efficiency comparable to traditional one-pass methods, and that it is over 100×
faster than existing image restoration methods that use vanilla diffusion
models.
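To make the wavelet-domain idea concrete, here is a minimal training-step sketch. It is not the authors' implementation: the one-level Haar transform, the linear noise schedule, and the denoiser(noisy_spec, cond_spec, t) interface are illustrative assumptions. It does show where the per-step saving comes from: after the transform, the network operates on tensors at half the spatial resolution, with the four sub-bands stacked along the channel dimension.

```python
# Minimal sketch (not the WaveDM code release): one-level Haar DWT plus a
# DDPM-style training step on wavelet coefficients. `denoiser` is an assumed
# network with the signature denoiser(noisy_spec, cond_spec, t).
import torch
import torch.nn.functional as F

def haar_dwt(x):
    """One-level 2D Haar transform: (B, C, H, W) -> (B, 4C, H/2, W/2).
    Sub-bands are stacked along channels in the order [LL, LH, HL, HH]."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return torch.cat([ll, lh, hl, hh], dim=1)

def training_step(denoiser, clean, degraded, num_timesteps=1000):
    """One conditional denoising training step carried out in the wavelet domain."""
    clean_spec = haar_dwt(clean)      # target distribution lives here
    cond_spec = haar_dwt(degraded)    # conditioning wavelet spectrum
    t = torch.randint(0, num_timesteps, (clean.size(0),), device=clean.device)
    # Linear beta schedule; the paper's actual schedule may differ.
    betas = torch.linspace(1e-4, 0.02, num_timesteps, device=clean.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(clean_spec)
    noisy_spec = alpha_bar.sqrt() * clean_spec + (1 - alpha_bar).sqrt() * noise
    return F.mse_loss(denoiser(noisy_spec, cond_spec, t), noise)
```

Note that WaveDM's actual design trains distinct modules for the low-frequency and high-frequency spectra and uses its Efficient Conditional Sampling schedule; the single denoiser above is only a simplification.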
Related papers
- Multi-scale Generative Modeling for Fast Sampling [38.570968785490514]
In the wavelet domain, we encounter unique challenges, especially the sparse representation of high-frequency coefficients.
We propose multi-scale generative modeling in the wavelet domain that employs distinct strategies for handling the low- and high-frequency bands.
As supported by theoretical analysis and experimental results, our model significantly improves performance and reduces the number of trainable parameters, sampling steps, and time.
arXiv Detail & Related papers (2024-11-14T11:01:45Z)
- A Wavelet Diffusion GAN for Image Super-Resolution [7.986370916847687]
Diffusion models have emerged as a superior alternative to generative adversarial networks (GANs) for high-fidelity image generation.
However, their real-time feasibility is hindered by slow training and inference speeds.
This study proposes a wavelet-based conditional Diffusion GAN scheme for Single-Image Super-Resolution.
arXiv Detail & Related papers (2024-10-23T15:34:06Z)
- ReNoise: Real Image Inversion Through Iterative Noising [62.96073631599749]
We introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations.
We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models.
arXiv Detail & Related papers (2024-03-21T17:52:08Z)
- Fast Sampling generative model for Ultrasound image reconstruction [3.3545464959630578]
We propose a novel sampling framework that concurrently enforces data consistency of ultrasound signals and data-driven priors.
By leveraging the advanced diffusion model, the generation of high-quality images is substantially expedited.
arXiv Detail & Related papers (2023-12-15T03:28:17Z)
- Stage-by-stage Wavelet Optimization Refinement Diffusion Model for Sparse-View CT Reconstruction [14.037398189132468]
We present an innovative approach named the Stage-by-stage Wavelet Optimization Refinement Diffusion (SWORD) model for sparse-view CT reconstruction.
Specifically, we establish a unified mathematical model integrating low-frequency and high-frequency generative models, obtaining the solution through an optimization procedure.
Our method is rooted in established optimization theory and comprises three distinct stages: low-frequency generation, high-frequency refinement, and domain transform (a generic sketch of such a domain transform appears after this list).
arXiv Detail & Related papers (2023-08-30T10:48:53Z)
- ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from slow inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
- Diffusion Probabilistic Model Made Slim [128.2227518929644]
We introduce a customized design for slim diffusion probabilistic models (DPM) for light-weight image synthesis.
We achieve an 8-18x reduction in computational complexity compared to latent diffusion models on a series of conditional and unconditional image generation tasks.
arXiv Detail & Related papers (2022-11-27T16:27:28Z)
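Several entries above (WaveDM itself, SWORD, and the multi-scale model) share the pattern of sampling wavelet coefficients and then transforming back to image space. Continuing the assumptions of the training sketch given after the main abstract, the following is a generic few-step, DDIM-style conditional sampling loop followed by the inverse Haar transform (the "domain transform" step); it is not WaveDM's ECS schedule or SWORD's staged optimization.

```python
# Continues the earlier sketch: reuses haar_dwt and the assumed denoiser interface.
import torch

def inverse_haar_dwt(spec):
    """Inverse of haar_dwt: (B, 4C, H/2, W/2) -> (B, C, H, W)."""
    ll, lh, hl, hh = torch.chunk(spec, 4, dim=1)
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    B, C, H2, W2 = a.shape
    x = torch.zeros(B, C, 2 * H2, 2 * W2, device=spec.device, dtype=spec.dtype)
    x[:, :, 0::2, 0::2] = a
    x[:, :, 0::2, 1::2] = b
    x[:, :, 1::2, 0::2] = c
    x[:, :, 1::2, 1::2] = d
    return x

@torch.no_grad()
def restore(denoiser, degraded, steps=5, num_timesteps=1000):
    """Few-step conditional sampling in the wavelet domain, then inverse DWT."""
    cond_spec = haar_dwt(degraded)                       # from the earlier sketch
    betas = torch.linspace(1e-4, 0.02, num_timesteps, device=degraded.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    ts = torch.linspace(num_timesteps - 1, 0, steps, device=degraded.device).long()
    x = torch.randn_like(cond_spec)                      # start from Gaussian noise
    for i in range(steps):
        t = ts[i]
        t_batch = torch.full((degraded.size(0),), int(t),
                             device=degraded.device, dtype=torch.long)
        eps = denoiser(x, cond_spec, t_batch)            # predicted noise
        ab_t = alpha_bar[t]
        x0 = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt() # clean-spectrum estimate
        ab_prev = alpha_bar[ts[i + 1]] if i + 1 < steps else torch.ones((), device=x.device)
        x = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * eps  # deterministic DDIM update
    return inverse_haar_dwt(x)                           # back to the spatial domain
```

In a real pipeline the degraded image is typically padded to even height and width before the transform, and multi-level decompositions are common; the single level here keeps the sketch short.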
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.