Towards Fast Stochastic Sampling in Diffusion Generative Models
- URL: http://arxiv.org/abs/2402.07211v2
- Date: Tue, 13 Feb 2024 07:14:24 GMT
- Title: Towards Fast Stochastic Sampling in Diffusion Generative Models
- Authors: Kushagra Pandey, Maja Rudolph, Stephan Mandt
- Abstract summary: Diffusion models suffer from slow sample generation at inference time.
We propose Splitting Integrators for fast stochastic sampling in pre-trained diffusion models in augmented spaces.
We show that a naive application of splitting is sub-optimal for fast sampling.
- Score: 22.01769257075573
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Diffusion models suffer from slow sample generation at inference time.
Despite recent efforts, improving the sampling efficiency of stochastic
samplers for diffusion models remains a promising direction. We propose
Splitting Integrators for fast stochastic sampling in pre-trained diffusion
models in augmented spaces. Commonly used in molecular dynamics,
splitting-based integrators attempt to improve sampling efficiency by cleverly
alternating between numerical updates involving the data, auxiliary, or noise
variables. However, we show that a naive application of splitting integrators
is sub-optimal for fast sampling. Consequently, we propose several principled
modifications to naive splitting samplers for improving sampling efficiency and
denote the resulting samplers as Reduced Splitting Integrators. In the context
of Phase Space Langevin Diffusion (PSLD) [Pandey & Mandt, 2023] on CIFAR-10,
our stochastic sampler achieves an FID score of 2.36 in only 100 network
function evaluations (NFE) as compared to 2.63 for the best baselines.
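To make the splitting idea concrete, below is a minimal, self-contained sketch of a BAOAB-type splitting step for an augmented (position-momentum) Langevin system with a toy analytic score. This illustrates the general technique only, not the paper's Reduced Splitting Integrators; in a real diffusion sampler the score would come from the pre-trained network, and all names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def grad_log_p(x):
        # Toy analytic score (standard Gaussian target); a diffusion model's
        # score network would be evaluated here instead.
        return -x

    def baoab_step(x, m, dt, gamma=1.0):
        # One B-A-O-A-B splitting step: alternate closed-form updates of the
        # momentum (B), the data variable (A), and the noise part (O).
        m = m + 0.5 * dt * grad_log_p(x)      # B: half kick from the score
        x = x + 0.5 * dt * m                  # A: half drift of the data variable
        c = np.exp(-gamma * dt)               # O: exact Ornstein-Uhlenbeck noise
        m = c * m + np.sqrt(1.0 - c * c) * rng.standard_normal(m.shape)
        x = x + 0.5 * dt * m                  # A: second half drift
        m = m + 0.5 * dt * grad_log_p(x)      # B: second half kick
        return x, m

    x, m = rng.standard_normal(4), np.zeros(4)
    for _ in range(100):                      # cf. the 100 NFE budget above
        x, m = baoab_step(x, m, dt=0.05)

Each sub-update has a simple closed form, which is what makes the composition easy to rearrange; the paper's Reduced Splitting Integrators modify how such naive sub-updates are combined.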
Related papers
- Single-Step Consistent Diffusion Samplers [8.758218443992467]
Existing sampling algorithms typically require many iterative steps to produce high-quality samples.
We introduce consistent diffusion samplers, a new class of samplers designed to generate high-fidelity samples in a single step.
We show that our approach yields high-fidelity samples using less than 1% of the network evaluations required by traditional diffusion samplers.
arXiv Detail & Related papers (2025-02-11T14:25:52Z)
- Neural Flow Samplers with Shortcut Models [19.81513273510523]
Flow-based samplers generate samples by learning a velocity field that satisfies the continuity equation.
While importance sampling can approximate the intractable quantities involved, it suffers from high variance.
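For reference, the continuity equation that such a velocity field v_t must satisfy together with the density path p_t (standard notation, assumed here rather than taken from the summary):

    \frac{\partial p_t(x)}{\partial t} + \nabla \cdot \bigl( p_t(x)\, v_t(x) \bigr) = 0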
arXiv Detail & Related papers (2025-02-11T07:55:41Z)
- Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations [53.180374639531145]
Self-Refining Diffusion Samplers (SRDS) retain sample quality and can improve latency at the cost of additional parallel compute.
We take inspiration from the Parareal algorithm, a popular numerical method for parallel-in-time integration of differential equations.
arXiv Detail & Related papers (2024-12-11T11:08:09Z)
- DC-Solver: Improving Predictor-Corrector Diffusion Sampler via Dynamic Compensation [68.55191764622525]
Diffusion probabilistic models (DPMs) have shown remarkable performance in visual synthesis but are computationally expensive due to the need for multiple evaluations during sampling.
Recent predictor-corrector diffusion samplers have significantly reduced the required number of evaluations, but inherently suffer from a misalignment issue.
We introduce a new fast DPM sampler called DC-Solver, which leverages dynamic compensation to mitigate the misalignment.
arXiv Detail & Related papers (2024-09-05T17:59:46Z)
- Boosting Diffusion Models with Moving Average Sampling in Frequency Domain [101.43824674873508]
Diffusion models rely on the current sample to denoise the next one, possibly resulting in denoising instability.
In this paper, we reinterpret the iterative denoising process as model optimization and leverage a moving average mechanism to ensemble all the prior samples.
We name the complete approach "Moving Average Sampling in Frequency domain (MASF)".
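A rough, self-contained sketch of the moving-average idea follows; the denoiser is a toy stand-in, and the frequency-dependent weighting that MASF actually applies is omitted here:

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_denoise_step(x, t):
        # Stand-in denoiser: contract toward the origin with small residual
        # noise; a real sampler would call the diffusion network here.
        return 0.9 * x + 0.05 * rng.standard_normal(x.shape)

    def sample_with_moving_average(x, num_steps=50, beta=0.9):
        avg = x
        for t in reversed(range(num_steps)):
            x = toy_denoise_step(avg, t)           # denoise the ensembled state
            avg = beta * avg + (1.0 - beta) * x    # moving average over prior samples
        return x

    x0 = sample_with_moving_average(rng.standard_normal(4))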
arXiv Detail & Related papers (2024-03-26T16:57:55Z)
- Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We propose Iterated Denoising Energy Matching (iDEM).
iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z)
- Sampler Scheduler for Diffusion Models [0.0]
Diffusion models (DMs) deliver high-quality generative performance.
Currently, however, there is a contradiction among the samplers available for diffusion-based generative models.
We show the feasibility of using different samplers (ODE/SDE) at different sampling steps of the same sampling process.
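A minimal sketch of the scheduling idea on a toy process; the SDE/ODE updates and all names are illustrative assumptions, not the paper's scheduler:

    import numpy as np

    rng = np.random.default_rng(0)

    def ode_step(x, dt):
        # Deterministic (probability-flow-style) toy update.
        return x - 0.5 * x * dt

    def sde_step(x, dt):
        # Stochastic toy update: same drift plus injected noise.
        return x - 0.5 * x * dt + np.sqrt(dt) * rng.standard_normal(x.shape)

    def scheduled_sample(x, schedule, dt=0.02):
        # Apply a (possibly different) sampler at each step of one trajectory.
        for step in schedule:
            x = step(x, dt)
        return x

    schedule = [sde_step] * 30 + [ode_step] * 20  # e.g. SDE early, ODE late
    x0 = scheduled_sample(rng.standard_normal(4), schedule)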
arXiv Detail & Related papers (2023-11-12T13:35:25Z)
- Efficient Integrators for Diffusion Generative Models [22.01769257075573]
Diffusion models suffer from slow sample generation at inference time.
We propose two complementary frameworks for accelerating sample generation in pre-trained models.
We present a hybrid method that leads to the best-reported performance for diffusion models in augmented spaces.
arXiv Detail & Related papers (2023-10-11T21:04:42Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- Parallel Sampling of Diffusion Models [76.3124029406809]
Diffusion models are powerful generative models but suffer from slow sampling.
We present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel.
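A toy illustration of the parallel-in-time idea behind such methods: a plain Picard fixed-point iteration on a linear probability-flow-style ODE. ParaDiGMS itself uses sliding windows and convergence tolerances, which are omitted here, and the drift is a stand-in for the network:

    import numpy as np

    def drift(x):
        # Toy probability-flow drift; a real sampler would evaluate the network.
        return -0.5 * x

    def picard_parallel_solve(x0, ts, num_sweeps=10):
        # Refine a whole trajectory at once: within each sweep, every step's
        # drift can be evaluated in parallel.
        n = len(ts)
        xs = np.tile(x0, (n, 1))                      # initial guess
        h = np.diff(ts)[:, None]
        for _ in range(num_sweeps):
            xs[1:] = x0 + np.cumsum(h * drift(xs[:-1]), axis=0)
        return xs[-1]

    x_end = picard_parallel_solve(np.ones(4), np.linspace(0.0, 1.0, 50))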
arXiv Detail & Related papers (2023-05-25T17:59:42Z)
- Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality [44.37533757879762]
We introduce Differentiable Diffusion Sampler Search (DDSS), a method that optimizes fast samplers for any pre-trained diffusion model.
We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models.
Our method is compatible with any pre-trained diffusion model without fine-tuning or re-training required.
arXiv Detail & Related papers (2022-02-11T18:53:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.