Diffusion Probabilistic Model Made Slim
- URL: http://arxiv.org/abs/2211.17106v1
- Date: Sun, 27 Nov 2022 16:27:28 GMT
- Title: Diffusion Probabilistic Model Made Slim
- Authors: Xingyi Yang, Daquan Zhou, Jiashi Feng, Xinchao Wang
- Abstract summary: We introduce a customized design for slim diffusion probabilistic models (DPM) for light-weight image synthesis.
We achieve 8-18x computational complexity reduction as compared to the latent diffusion models on a series of conditional and unconditional image generation tasks.
- Score: 128.2227518929644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the recent visually-pleasing results achieved, the massive
computational cost has been a long-standing flaw for diffusion probabilistic
models (DPMs), which, in turn, greatly limits their applications on
resource-limited platforms. Prior methods towards efficient DPM, however, have
largely focused on accelerating the testing yet overlooked their huge
complexity and sizes. In this paper, we make a dedicated attempt to lighten DPM
while striving to preserve its favourable performance. We start by training a
small-sized latent diffusion model (LDM) from scratch, but observe a
significant fidelity drop in the synthetic images. Through a thorough
assessment, we find that DPM is intrinsically biased against high-frequency
generation, and learns to recover different frequency components at different
time-steps. These properties make compact networks unable to represent
frequency dynamics with accurate high-frequency estimation. Towards this end,
we introduce a customized design for slim DPM, which we term as Spectral
Diffusion (SD), for light-weight image synthesis. SD incorporates wavelet
gating in its architecture to enable frequency dynamic feature extraction at
every reverse step, and conducts spectrum-aware distillation to promote
high-frequency recovery by inversely weighting the objective based on spectrum
magnitudes. Experimental results demonstrate that SD achieves an 8-18x
computational complexity reduction as compared to the latent diffusion models
on a series of conditional and unconditional image generation tasks while
retaining competitive image fidelity.
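The spectrum-aware distillation described above weights the objective inversely by spectrum magnitudes so that under-represented high frequencies contribute more to the loss. The following is a minimal illustrative sketch of that idea, not the paper's exact recipe; the function name and the specific weighting form are assumptions.

```python
import numpy as np

def spectrum_weighted_loss(student_out, teacher_out, eps=1e-8):
    """Sketch of a spectrum-aware distillation objective: the per-frequency
    error is weighted by the inverse of the teacher's spectrum magnitude,
    so low-magnitude (typically high-frequency) bins are emphasized."""
    # 2-D FFTs of the teacher and student outputs (H x W arrays)
    F_t = np.fft.fft2(teacher_out)
    F_s = np.fft.fft2(student_out)
    mag = np.abs(F_t)
    # inverse weighting: small-magnitude bins receive larger weight
    w = 1.0 / (mag + eps)
    w = w / w.sum()  # normalize so the loss scale stays stable
    return float(np.sum(w * np.abs(F_s - F_t) ** 2))
```

Because natural images concentrate energy at low frequencies, this weighting pushes a compact student network to match the teacher's high-frequency content rather than only the dominant low-frequency structure.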
Related papers
- Memory-Efficient Fine-Tuning for Quantized Diffusion Model [12.875837358532422]
We introduce TuneQDM, a memory-efficient fine-tuning method for quantized diffusion models.
Our method consistently outperforms the baseline in both single-/multi-subject generations.
arXiv Detail & Related papers (2024-01-09T03:42:08Z)
- Speeding up Photoacoustic Imaging using Diffusion Models [0.0]
Photoacoustic Microscopy (PAM) integrates optical and acoustic imaging, offering enhanced penetration depth for detecting optical-absorbing components in tissues.
With speed limitations imposed by laser pulse repetition rates, the potential role of computational methods is highlighted in accelerating PAM imaging.
We are proposing a novel and highly adaptable DiffPam algorithm that utilizes diffusion models for speeding up the PAM imaging process.
arXiv Detail & Related papers (2023-12-14T11:34:27Z)
- Fast Diffusion Model [122.36693015093041]
Diffusion models (DMs) have been adopted across diverse fields with their abilities in capturing intricate data distributions.
In this paper, we propose a Fast Diffusion Model (FDM) to significantly speed up DMs from a DM optimization perspective.
arXiv Detail & Related papers (2023-06-12T09:38:04Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from time-consuming, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- WaveDM: Wavelet-Based Diffusion Models for Image Restoration [43.254438752311714]
Wavelet-Based Diffusion Model (WaveDM) learns the distribution of clean images in the wavelet domain conditioned on the wavelet spectrum of degraded images after wavelet transform.
WaveDM achieves state-of-the-art performance with the efficiency that is comparable to traditional one-pass methods.
arXiv Detail & Related papers (2023-05-23T08:41:04Z)
- Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
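To make the 4-bit claim concrete, here is a minimal sketch of generic symmetric uniform post-training quantization. It only illustrates mapping full-precision weights to 4-bit integer codes; Q-Diffusion's actual method involves timestep-aware calibration and is considerably more involved.

```python
import numpy as np

def quantize_uniform(w, n_bits=4):
    """Symmetric uniform PTQ sketch: map weights to signed n-bit integer
    codes via a per-tensor scale, then de-quantize back to floats."""
    qmax = 2 ** (n_bits - 1) - 1                        # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(w)) / qmax                    # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)   # integer codes in [-8, 7]
    return q * scale                                    # de-quantized approximation
```

The worst-case round-off error of this scheme is half a quantization step (`scale / 2`), which is why naive per-tensor PTQ degrades noticeably at 4 bits and motivates the tailored calibration proposed in the paper.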
arXiv Detail & Related papers (2023-02-08T19:38:59Z)
- Accelerating Score-based Generative Models with Preconditioned Diffusion Sampling [36.02321871608158]
We propose a model-agnostic preconditioned diffusion sampling (PDS) method that leverages matrix preconditioning to alleviate the problem.
PDS consistently accelerates off-the-shelf SGMs whilst maintaining the synthesis quality.
In particular, PDS can accelerate by up to 29x on more challenging high resolution (1024x1024) image generation.
arXiv Detail & Related papers (2022-07-05T17:55:42Z)
- Accelerating Diffusion Models via Early Stop of the Diffusion Process [114.48426684994179]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved impressive performance on various generation tasks.
In practice, DDPMs often need hundreds or even thousands of denoising steps to obtain a high-quality sample.
We propose a principled acceleration strategy, referred to as Early-Stopped DDPM (ES-DDPM), for DDPMs.
arXiv Detail & Related papers (2022-05-25T06:40:09Z)
- Denoising Diffusion Implicit Models [117.03720513930335]
We present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs.
DDIMs can produce high quality samples $10\times$ to $50\times$ faster in terms of wall-clock time compared to DDPMs.
arXiv Detail & Related papers (2020-10-06T06:15:51Z)
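The DDIM speedup comes from a deterministic update that can jump across many timesteps at once. A minimal sketch of the standard deterministic DDIM step (the eta = 0 case) is below; `eps_pred` stands in for the output of a trained noise-prediction network, which is not modeled here.

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update: predict the clean sample x_0 from the
    current noisy sample, then jump directly to an earlier timestep whose
    cumulative noise schedule value is alpha_bar_prev."""
    # predict x_0 from x_t and the predicted noise
    x0 = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
    # re-noise x_0 to the target (earlier) timestep, reusing the same noise direction
    return np.sqrt(alpha_bar_prev) * x0 + np.sqrt(1.0 - alpha_bar_prev) * eps_pred
```

Because the update is deterministic, the target timestep need not be adjacent to the current one, which is what lets DDIM sampling use far fewer network evaluations than the full DDPM chain.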
This list is automatically generated from the titles and abstracts of the papers in this site.