Stochastic Runge-Kutta Methods: Provable Acceleration of Diffusion Models
- URL: http://arxiv.org/abs/2410.04760v1
- Date: Mon, 7 Oct 2024 05:34:51 GMT
- Title: Stochastic Runge-Kutta Methods: Provable Acceleration of Diffusion Models
- Authors: Yuchen Wu, Yuxin Chen, Yuting Wei
- Abstract summary: We propose and analyze a training-free acceleration algorithm for SDE-style diffusion samplers, based on the stochastic Runge-Kutta method.
The proposed sampler provably attains $\varepsilon^2$ error -- measured in KL divergence -- using $\widetilde{O}(d^{3/2} / \varepsilon)$ score function evaluations.
- Score: 21.961189407928853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models play a pivotal role in contemporary generative modeling, claiming state-of-the-art performance across various domains. Despite their superior sample quality, mainstream diffusion-based stochastic samplers like DDPM often require a large number of score function evaluations, incurring considerably higher computational cost compared to single-step generators like generative adversarial networks. While several acceleration methods have been proposed in practice, the theoretical foundations for accelerating diffusion models remain underexplored. In this paper, we propose and analyze a training-free acceleration algorithm for SDE-style diffusion samplers, based on the stochastic Runge-Kutta method. The proposed sampler provably attains $\varepsilon^2$ error -- measured in KL divergence -- using $\widetilde O(d^{3/2} / \varepsilon)$ score function evaluations (for sufficiently small $\varepsilon$), strengthening the state-of-the-art guarantees $\widetilde O(d^{3} / \varepsilon)$ in terms of dimensional dependency. Numerical experiments validate the efficiency of the proposed method.
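For intuition only, the following is a minimal sketch of a two-stage (Heun-style) stochastic Runge-Kutta step for the reverse-time SDE of a standard Ornstein-Uhlenbeck forward process. The toy Gaussian target, the closed-form `score_fn`, and the step-size choices are all illustrative assumptions; this is not the paper's exact scheme.

```python
import numpy as np

def score_fn(x, t):
    # Exact score for toy data ~ N(0, I): the OU forward process leaves
    # N(0, I) invariant, so grad log p_t(x) = -x at every t.
    # (In practice a learned score network would replace this.)
    return -x

def srk_heun_sampler(dim, n_steps=50, T=5.0, seed=0):
    # Hypothetical two-stage stochastic Runge-Kutta (stochastic Heun) sampler
    # for the reverse-time SDE  dy = [y + 2*score(y, t)] dt + sqrt(2) dW
    # associated with the forward OU process  dx = -x dt + sqrt(2) dW.
    rng = np.random.default_rng(seed)
    h = T / n_steps
    y = rng.standard_normal(dim)  # initialize at the stationary N(0, I)
    drift = lambda z, t: z + 2.0 * score_fn(z, t)
    for k in range(n_steps):
        t = T - k * h                               # remaining forward time
        dW = np.sqrt(2.0 * h) * rng.standard_normal(dim)
        y_pred = y + h * drift(y, t) + dW           # predictor: Euler-Maruyama
        y = y + 0.5 * h * (drift(y, t) + drift(y_pred, t - h)) + dW  # corrector
    return y

samples = np.stack([srk_heun_sampler(dim=2, seed=s) for s in range(2000)])
print("sample mean ~ 0:", samples.mean(axis=0))
print("sample var  ~ 1:", samples.var(axis=0))
```

Sharing one Brownian increment between the predictor and corrector stages is what makes this a consistent stochastic (rather than deterministic) Runge-Kutta step for the additive-noise SDE.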
Related papers
- Provable Acceleration for Diffusion Models under Minimal Assumptions [8.15094483029656]
We propose a novel training-free acceleration scheme for score-based samplers.
Under minimal assumptions, our scheme achieves $\varepsilon$-accuracy in total variation within $\widetilde{O}(d^{5/4}/\sqrt{\varepsilon})$ iterations.
arXiv Detail & Related papers (2024-10-30T17:59:06Z) - Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV).
We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
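As a small numerical illustration of this setting (the standard Ornstein-Uhlenbeck forward process is assumed; the bimodal toy data and the mode distance $R$ are illustrative), the closed-form OU marginal makes the contraction toward the stationary Gaussian explicit:

```python
import numpy as np

def ou_forward(x0, t, rng):
    # Closed-form OU marginal:  x_t = e^{-t} x_0 + sqrt(1 - e^{-2t}) z,  z ~ N(0, 1)
    return np.exp(-t) * x0 + np.sqrt(1.0 - np.exp(-2.0 * t)) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
R = 5.0                               # distance to the furthest mode
x0 = rng.choice([-R, R], size=10000)  # bimodal toy data with modes at +/- R
for t in [0.5, 1.0, 2.0, 4.0]:
    xt = ou_forward(x0, t, rng)
    # the residual mode separation shrinks like e^{-t} R
    print(f"t={t}: mode separation ~ {np.exp(-t) * R:.3f}, sample std = {xt.std():.3f}")
```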
arXiv Detail & Related papers (2024-08-25T10:28:31Z) - Accelerating Convergence of Score-Based Diffusion Models, Provably [44.11766377798812]
Score-based diffusion models often suffer from low sampling speed due to extensive function evaluations needed during the sampling phase.
We design novel training-free algorithms to accelerate popular deterministic (i.e., DDIM) and stochastic (i.e., DDPM) samplers.
Our theory accommodates $\ell_2$-accurate score estimates, and does not require log-concavity or smoothness on the target distribution.
arXiv Detail & Related papers (2024-03-06T17:02:39Z) - Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called Gaussian Mixture Solvers (GMS) for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z) - SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models [66.67616086310662]
Diffusion Probabilistic Models (DPMs) have achieved considerable success in generation tasks.
As sampling from DPMs is equivalent to solving diffusion SDE or ODE which is time-consuming, numerous fast sampling methods built upon improved differential equation solvers are proposed.
We propose SA-Solver, an improved and efficient stochastic Adams method for solving diffusion SDEs to generate data with high quality.
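For intuition, here is a hedged sketch of a generic two-step stochastic Adams-Bashforth update for an additive-noise reverse SDE; the toy drift, step size, and Euler bootstrap are illustrative assumptions, not SA-Solver's actual scheme. The point of a multistep (Adams-type) method is that it reuses the previous drift evaluation, so each step costs a single new score call.

```python
import numpy as np

def stochastic_ab2_step(y, f_prev, t, h, drift, rng):
    # Two-step Adams-Bashforth drift extrapolation plus the exact additive
    # Brownian increment of  dy = drift(y, t) dt + sqrt(2) dW.
    f = drift(y, t)
    dW = np.sqrt(2.0 * h) * rng.standard_normal(y.shape)
    return y + h * (1.5 * f - 0.5 * f_prev) + dW, f

rng = np.random.default_rng(0)
drift = lambda y, t: -y   # reverse drift y + 2*score collapses to -y for N(0, I) toy data
h, T = 0.1, 5.0
y, f_prev = rng.standard_normal(2), None
for k in range(int(T / h)):
    t = T - k * h
    if f_prev is None:    # bootstrap the multistep history with one Euler-Maruyama step
        f_prev = drift(y, t)
        y = y + h * f_prev + np.sqrt(2.0 * h) * rng.standard_normal(y.shape)
    else:
        y, f_prev = stochastic_ab2_step(y, f_prev, t, h, drift, rng)
print("final sample (roughly N(0, I)):", y)
```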
arXiv Detail & Related papers (2023-09-10T12:44:54Z) - Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z) - A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
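A quick way to see this relationship (a sketch assuming a Gaussian-KDE score for a variance-exploding process; the toy data, noise schedule, and Euler discretization are all illustrative): the exact score of data smoothed with $N(0, \sigma^2 I)$ equals $(m(x) - x)/\sigma^2$, where $m(x)$ is the Gaussian-kernel weighted mean of the data, so each probability-flow ODE step is a damped mean-shift move toward $m(x)$.

```python
import numpy as np

def kde_score(x, data, sigma):
    # Score of the data distribution convolved with N(0, sigma^2 I):
    # grad log p_sigma(x) = (m(x) - x) / sigma^2, where m(x) is the
    # Gaussian-kernel weighted mean -- exactly the mean-shift target.
    w = np.exp(-np.sum((x - data) ** 2, axis=1) / (2.0 * sigma ** 2))
    m = (w[:, None] * data).sum(axis=0) / w.sum()
    return (m - x) / sigma ** 2

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 2)) + np.array([3.0, 0.0])  # one data cluster
x = np.array([10.0, 5.0])                                    # far-away initial point
sigmas = np.geomspace(5.0, 0.05, 40)                         # decreasing VE noise levels
for s_hi, s_lo in zip(sigmas[:-1], sigmas[1:]):
    # Euler step of the probability-flow ODE  dx/dsigma = -sigma * score(x)
    x = x + (s_lo - s_hi) * (-s_hi) * kde_score(x, data, s_hi)
print("x ends near the cluster mode:", x)
```

Each update moves $x$ a fraction $(1 - \sigma_{\mathrm{lo}}/\sigma_{\mathrm{hi}})$ of the way toward $m(x)$, i.e., a damped mean-shift iteration with a shrinking bandwidth.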
arXiv Detail & Related papers (2023-05-31T15:33:16Z) - Towards Controllable Diffusion Models via Reward-Guided Exploration [15.857464051475294]
We propose a novel framework that guides the training phase of diffusion models via reinforcement learning (RL).
RL enables calculating policy gradients via samples from a pay-off distribution proportional to exponentially scaled rewards, rather than from the policies themselves.
Experiments on 3D shape and molecule generation tasks show significant improvements over existing conditional diffusion models.
arXiv Detail & Related papers (2023-04-14T13:51:26Z) - How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large $T$ to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z)