Convergence of Deterministic and Stochastic Diffusion-Model Samplers: A Simple Analysis in Wasserstein Distance
- URL: http://arxiv.org/abs/2508.03210v1
- Date: Tue, 05 Aug 2025 08:37:58 GMT
- Title: Convergence of Deterministic and Stochastic Diffusion-Model Samplers: A Simple Analysis in Wasserstein Distance
- Authors: Eliot Beyler, Francis Bach
- Abstract summary: We provide convergence guarantees in Wasserstein distance for diffusion-based generative models, covering both stochastic (DDPM-like) and deterministic (DDIM-like) sampling methods. Notably, we derive the first Wasserstein convergence bound for the Heun sampler and improve existing results for the Euler sampler of the probability flow ODE.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide new convergence guarantees in Wasserstein distance for diffusion-based generative models, covering both stochastic (DDPM-like) and deterministic (DDIM-like) sampling methods. We introduce a simple framework to analyze discretization, initialization, and score estimation errors. Notably, we derive the first Wasserstein convergence bound for the Heun sampler and improve existing results for the Euler sampler of the probability flow ODE. Our analysis emphasizes the importance of spatial regularity of the learned score function and argues for controlling the score error with respect to the true reverse process, in line with denoising score matching. We also incorporate recent results on smoothed Wasserstein distances to sharpen initialization error bounds.
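To make the samplers discussed in the abstract concrete, the following is a minimal illustrative sketch (not code from the paper): a deterministic Euler step and a Heun predictor-corrector step for the probability flow ODE, and a stochastic Euler-Maruyama step for the reverse SDE (DDPM-like). It assumes a variance-preserving forward process with a linear noise schedule and uses the exact score of a one-dimensional Gaussian toy target, so discretization is the only error source; the schedule parameters and function names are illustrative assumptions, not the paper's.

```python
# Illustrative sketch: Euler, Heun, and DDPM-like samplers for a
# variance-preserving (VP) diffusion with a known 1-D Gaussian target.
import numpy as np

T = 1.0                                   # time horizon of the forward process
beta = lambda t: 0.1 + 19.9 * t           # linear noise schedule (assumed)
B = lambda t: 0.1 * t + 19.9 * t**2 / 2   # integral of beta from 0 to t

sigma0_sq = 4.0                           # toy data distribution N(0, sigma0_sq)

def sigma_sq(t):
    """Marginal variance of the VP forward process started from N(0, sigma0_sq)."""
    return sigma0_sq * np.exp(-B(t)) + 1.0 - np.exp(-B(t))

def score(x, t):
    """Exact score of the Gaussian marginal p_t = N(0, sigma_sq(t))."""
    return -x / sigma_sq(t)

def ode_drift(x, t):
    """Drift of the probability flow ODE: dx/dt = -0.5 * beta(t) * (x + score(x, t))."""
    return -0.5 * beta(t) * (x + score(x, t))

def sample_euler(n, steps, rng):
    """Deterministic Euler (DDIM-like) integration of the probability flow ODE."""
    ts = np.linspace(T, 1e-3, steps + 1)               # integrate backwards in time
    x = rng.standard_normal(n) * np.sqrt(sigma_sq(T))  # Gaussian initialization
    for t, t_next in zip(ts[:-1], ts[1:]):
        x = x + (t_next - t) * ode_drift(x, t)
    return x

def sample_heun(n, steps, rng):
    """Heun's method: Euler predictor plus trapezoidal corrector (second order)."""
    ts = np.linspace(T, 1e-3, steps + 1)
    x = rng.standard_normal(n) * np.sqrt(sigma_sq(T))
    for t, t_next in zip(ts[:-1], ts[1:]):
        h = t_next - t
        d1 = ode_drift(x, t)
        x_pred = x + h * d1                             # Euler predictor
        d2 = ode_drift(x_pred, t_next)
        x = x + 0.5 * h * (d1 + d2)                     # average the two slopes
    return x

def sample_ddpm_like(n, steps, rng):
    """Stochastic (DDPM-like) sampler: Euler-Maruyama on the reverse SDE."""
    ts = np.linspace(T, 1e-3, steps + 1)
    x = rng.standard_normal(n) * np.sqrt(sigma_sq(T))
    for t, t_next in zip(ts[:-1], ts[1:]):
        h = t - t_next                                  # positive step size
        x = x + h * (0.5 * beta(t) * x + beta(t) * score(x, t)) \
              + np.sqrt(beta(t) * h) * rng.standard_normal(n)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for name, sampler in [("Euler", sample_euler), ("Heun", sample_heun),
                          ("DDPM-like", sample_ddpm_like)]:
        x = sampler(50_000, 50, rng)
        print(f"{name:10s} sample variance: {x.var():.3f} (target {sigma0_sq})")
```

With the exact score available, comparing the sample variance against the target for different step counts isolates the discretization error of each scheme, which is the quantity the paper's Wasserstein bounds control.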
Related papers
- The Effect of Stochasticity in Score-Based Diffusion Sampling: a KL Divergence Analysis [0.0]
We study the effect of stochasticity on the generation process through bounds on the Kullback-Leibler (KL) divergence.
Our main results apply to linear forward SDEs with additive noise and Lipschitz-continuous score functions.
arXiv Detail & Related papers (2025-06-13T01:01:07Z) - Unregularized limit of stochastic gradient method for Wasserstein distributionally robust optimization [8.784017987697688]
Distributionally robust optimization offers a compelling framework for model fitting in machine learning.
We investigate the regularized problem where entropic smoothing yields a sampling-based approximation of the original objective.
arXiv Detail & Related papers (2025-06-05T12:21:44Z) - From Score Matching to Diffusion: A Fine-Grained Error Analysis in the Gaussian Setting [25.21429354164613]
We show that the Wasserstein sampling error can be expressed as a kernel-type norm of the data power spectrum.
arXiv Detail & Related papers (2025-03-14T17:35:00Z) - Advancing Wasserstein Convergence Analysis of Score-Based Models: Insights from Discretization and Second-Order Acceleration [5.548787731232499]
We focus on the Wasserstein convergence analysis of score-based diffusion models.
We compare various discretization schemes, including Euler discretization, the exponential midpoint method, and randomization methods.
We propose an accelerated sampler based on the local linearization method.
arXiv Detail & Related papers (2025-02-07T11:37:51Z) - Wasserstein Bounds for generative diffusion models with Gaussian tail targets [0.0]
We present an estimate of the Wasserstein distance between the data distribution and the generation of score-based generative models.
The complexity bound in dimension is $O(\sqrt{d})$, with a logarithmic constant.
arXiv Detail & Related papers (2024-12-15T17:20:42Z) - On the Wasserstein Convergence and Straightness of Rectified Flow [54.580605276017096]
Rectified Flow (RF) is a generative model that aims to learn straight flow trajectories from noise to data.
We provide a theoretical analysis of the Wasserstein distance between the sampling distribution of RF and the target distribution.
We present general conditions guaranteeing uniqueness and straightness of 1-RF, which is in line with previous empirical findings.
arXiv Detail & Related papers (2024-10-19T02:36:11Z) - Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers [49.1574468325115]
We introduce a unified convergence analysis framework for deterministic samplers.
Our framework achieves iteration complexity of $\tilde{O}(d^2/\epsilon)$.
We also provide a detailed analysis of Denoising Diffusion Implicit Model (DDIM)-type samplers.
arXiv Detail & Related papers (2024-10-18T07:37:36Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis For DDIM-Type Samplers [90.45898746733397]
We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling.
We show that one step along the probability flow ODE can be expressed as two steps: 1) a restoration step that runs gradient ascent on the conditional log-likelihood at some infinitesimally previous time, and 2) a degradation step that runs the forward process using noise pointing back towards the current iterate.
arXiv Detail & Related papers (2023-03-06T18:59:19Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)