Towards Fast Stochastic Sampling in Diffusion Generative Models
- URL: http://arxiv.org/abs/2402.07211v2
- Date: Tue, 13 Feb 2024 07:14:24 GMT
- Title: Towards Fast Stochastic Sampling in Diffusion Generative Models
- Authors: Kushagra Pandey, Maja Rudolph, Stephan Mandt
- Abstract summary: Diffusion models suffer from slow sample generation at inference time.
We propose Splitting Integrators for fast stochastic sampling in pre-trained diffusion models in augmented spaces.
We show that a naive application of splitting integrators is sub-optimal for fast sampling.
- Score: 22.01769257075573
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Diffusion models suffer from slow sample generation at inference time.
Despite recent efforts, improving the sampling efficiency of stochastic
samplers for diffusion models remains a promising direction. We propose
Splitting Integrators for fast stochastic sampling in pre-trained diffusion
models in augmented spaces. Commonly used in molecular dynamics,
splitting-based integrators attempt to improve sampling efficiency by cleverly
alternating between numerical updates involving the data, auxiliary, or noise
variables. However, we show that a naive application of splitting integrators
is sub-optimal for fast sampling. Consequently, we propose several principled
modifications to naive splitting samplers for improving sampling efficiency and
denote the resulting samplers as Reduced Splitting Integrators. In the context
of Phase Space Langevin Diffusion (PSLD) [Pandey & Mandt, 2023] on CIFAR-10,
our stochastic sampler achieves an FID score of 2.36 in only 100 network
function evaluations (NFE) as compared to 2.63 for the best baselines.
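To make the splitting idea concrete, the snippet below sketches one naive splitting step (an "ABO" Lie-Trotter composition) for an underdamped Langevin-type sampler in an augmented position-momentum space: each sub-step touches only the data variable, only the auxiliary momentum, or only the noise injection, and each can be solved exactly. This is a generic sketch, not the paper's Reduced Splitting Integrators; `force_fn` (standing in for the learned score), the step size `dt`, and the friction `gamma` are illustrative assumptions.

```python
import numpy as np

def naive_splitting_step(x, m, force_fn, t, dt, gamma=2.0):
    """One naive (Lie-Trotter) splitting update for an underdamped
    Langevin-type sampler with data variable x and auxiliary momentum m.
    Illustrative sketch only; not the paper's Reduced Splitting scheme.
    """
    # A-step: transport the data variable along the momentum.
    x = x + m * dt
    # B-step: kick the momentum with the force term (in a diffusion
    # sampler, the learned score model plays this role).
    m = m + force_fn(x, t) * dt
    # O-step: exact Ornstein-Uhlenbeck update (friction plus noise),
    # applied to the momentum only.
    c = np.exp(-gamma * dt)
    m = c * m + np.sqrt(1.0 - c * c) * np.random.randn(*m.shape)
    return x, m

# Toy usage: sample a standard normal, whose score is force(x) = -x.
x, m = np.random.randn(1000), np.random.randn(1000)
for _ in range(500):
    x, m = naive_splitting_step(x, m, lambda x, t: -x, t=None, dt=0.05)
```

The appeal of splitting is that each sub-flow admits an exact or cheap solution; the paper's observation is that naive compositions like the one above degrade at small NFE budgets, which is what its principled "reduced" modifications address.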
Related papers
- DC-Solver: Improving Predictor-Corrector Diffusion Sampler via Dynamic Compensation [68.55191764622525]
Diffusion probabilistic models (DPMs) have shown remarkable performance in visual synthesis but are computationally expensive due to the need for multiple evaluations during sampling.
Recent predictor-corrector diffusion samplers have significantly reduced the required number of evaluations but inherently suffer from a misalignment issue.
We introduce a new fast DPM sampler called DC-Solver, which leverages dynamic compensation to mitigate the misalignment.
arXiv Detail & Related papers (2024-09-05T17:59:46Z) - Diffusion Rejection Sampling [13.945372555871414]
Diffusion Rejection Sampling (DiffRS) is a rejection sampling scheme that aligns the sampling transition kernels with the true ones at each timestep.
The proposed method can be viewed as a mechanism that evaluates the quality of samples at each intermediate timestep and refines them with varying effort depending on the sample.
Empirical results demonstrate the state-of-the-art performance of DiffRS on the benchmark datasets and the effectiveness of DiffRS for fast diffusion samplers and large-scale text-to-image diffusion models.
arXiv Detail & Related papers (2024-05-28T07:00:28Z) - Score-based Generative Models with Adaptive Momentum [40.84399531998246]
We propose an adaptive momentum sampling method to accelerate the noise-to-data transformation process.
We show that our method can produce more faithful images/graphs in fewer sampling steps, with a 2 to 5 times speed-up.
arXiv Detail & Related papers (2024-05-22T15:20:27Z) - Boosting Diffusion Models with Moving Average Sampling in Frequency Domain [101.43824674873508]
Diffusion models rely on the current sample to denoise the next one, possibly resulting in denoising instability.
In this paper, we reinterpret the iterative denoising process as model optimization and leverage a moving average mechanism to ensemble all the prior samples.
We name the complete approach "Moving Average Sampling in Frequency domain (MASF)".
arXiv Detail & Related papers (2024-03-26T16:57:55Z) - Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We propose Iterated Denoising Energy Matching (iDEM) for sampling from Boltzmann densities.
iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z) - Sampler Scheduler for Diffusion Models [0.0]
Diffusion models (DMs) deliver high-quality generative performance.
However, ODE- and SDE-based samplers for diffusion models currently have conflicting strengths, so no single sampler is best throughout the whole trajectory.
We propose using different samplers (ODE/SDE) at different steps of the same sampling process (a minimal dispatch sketch appears at the end of this list).
arXiv Detail & Related papers (2023-11-12T13:35:25Z) - Efficient Integrators for Diffusion Generative Models [22.01769257075573]
Diffusion models suffer from slow sample generation at inference time.
We propose two complementary frameworks for accelerating sample generation in pre-trained models.
We present a hybrid method that leads to the best-reported performance for diffusion models in augmented spaces.
arXiv Detail & Related papers (2023-10-11T21:04:42Z) - Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains generative performance comparable to diffusion-based models and vastly superior results when only a small number of sampling steps is used.
arXiv Detail & Related papers (2023-06-21T18:49:22Z) - Parallel Sampling of Diffusion Models [76.3124029406809]
Diffusion models are powerful generative models but suffer from slow sampling.
We present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel.
arXiv Detail & Related papers (2023-05-25T17:59:42Z) - Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality [44.37533757879762]
We introduce Differentiable Diffusion Sampler Search (DDSS), a method that optimizes fast samplers for any pre-trained diffusion model.
We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models.
Our method is compatible with any pre-trained diffusion model without fine-tuning or re-training required.
arXiv Detail & Related papers (2022-02-11T18:53:18Z) - Denoising Diffusion Implicit Models [117.03720513930335]
We present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs.
DDIMs can produce high-quality samples $10\times$ to $50\times$ faster in wall-clock time than DDPMs.
arXiv Detail & Related papers (2020-10-06T06:15:51Z)
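Since DDIM underlies many of the fast samplers listed above, a compact sketch of its update may be helpful. This follows the standard DDIM formulation; `eps_model` (a pretrained noise-prediction network) and `alpha_bar` (the cumulative noise schedule as a 1-D tensor) are assumed inputs, not defined on this page.

```python
import torch

@torch.no_grad()
def ddim_step(x_t, t, t_prev, eps_model, alpha_bar, eta=0.0):
    """One DDIM update from timestep t to t_prev (standard notation)."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    # Predict the clean sample x_0 from the noise estimate.
    x0_pred = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)
    # eta = 0: deterministic DDIM; eta = 1: DDPM-like ancestral sampling.
    sigma = eta * torch.sqrt((1 - a_prev) / (1 - a_t) * (1 - a_t / a_prev))
    # Re-noise toward t_prev: deterministic direction plus optional noise.
    dir_xt = torch.sqrt(1.0 - a_prev - sigma**2) * eps
    return torch.sqrt(a_prev) * x0_pred + dir_xt + sigma * torch.randn_like(x_t)
```

Setting eta = 0 removes all stochasticity, which is what permits the large step sizes behind the $10\times$ to $50\times$ wall-clock speed-ups reported above.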
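Finally, the Sampler Scheduler entry above amounts, mechanically, to per-step solver dispatch. The sketch below assumes a hypothetical single-step interface for the two solvers, and its switch-over heuristic is one plausible choice, not the paper's actual schedule.

```python
def scheduled_sampler(x_T, timesteps, sde_step, ode_step, switch_at):
    """Sample by dispatching each denoising step to a different solver.

    Hypothetical interface (not the paper's actual API):
    `sde_step(x, t, t_next)` and `ode_step(x, t, t_next)` are assumed
    single-step solvers for the reverse SDE and the probability-flow
    ODE; `switch_at` is the step index where we hand over between them.
    """
    x = x_T
    for i, (t, t_next) in enumerate(zip(timesteps[:-1], timesteps[1:])):
        # One possible schedule: stochastic steps early (high noise,
        # where injected noise can correct accumulated error), then
        # deterministic ODE steps late (low noise, cheap and stable).
        x = (sde_step if i < switch_at else ode_step)(x, t, t_next)
    return x
```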