Initialization-Aware Score-Based Diffusion Sampling
- URL: http://arxiv.org/abs/2603.00772v1
- Date: Sat, 28 Feb 2026 18:37:10 GMT
- Title: Initialization-Aware Score-Based Diffusion Sampling
- Authors: Tiziano Fassina, Gabriel Cardoso, Sylvain Le Corff, Thomas Romary
- Abstract summary: Classical samplers initialized from a Gaussian distribution require a long noising time horizon, typically inducing a large number of discretization steps and a high computational cost. We present a Kullback-Leibler convergence analysis of Variance Exploding diffusion samplers that highlights the critical role of the backward process initialization. Experiments on toy distributions and benchmark datasets demonstrate competitive or improved generative quality while using significantly fewer sampling steps.
- Score: 2.554905387213586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Score-based generative models (SGMs) aim at generating samples from a target distribution by approximating the reverse-time dynamics of a stochastic differential equation. Despite their strong empirical performance, classical samplers initialized from a Gaussian distribution require a long noising time horizon, which typically induces a large number of discretization steps and a high computational cost. In this work, we present a Kullback-Leibler convergence analysis of Variance Exploding diffusion samplers that highlights the critical role of the backward process initialization. Based on this result, we propose a theoretically grounded sampling strategy that learns the reverse-time initialization, directly minimizing the initialization error. The resulting procedure is independent of the specific score training procedure, network architecture, and discretization scheme. Experiments on toy distributions and benchmark datasets demonstrate competitive or improved generative quality while using significantly fewer sampling steps.
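To make the proposal concrete, here is a hedged sketch (not the authors' code) of reverse-time Euler-Maruyama sampling for a Variance Exploding SDE in which the backward chain starts from an initialization fitted to forward-noised data rather than the usual wide Gaussian. The score function, the moment-matched initialization, and all names are illustrative stand-ins; the paper learns the initialization by directly minimizing the initialization error.

```python
# Hedged sketch (not the authors' code): reverse-time Euler-Maruyama
# sampling of a Variance Exploding (VE) SDE, with the backward chain
# initialized from a distribution fitted to forward-noised data at t = 1
# rather than the usual wide Gaussian N(0, sigma_max^2 I). Moment matching
# stands in for the paper's learned, error-minimizing initialization.
import numpy as np

rng = np.random.default_rng(0)

def sigma(t, sigma_min=0.01, sigma_max=10.0):
    """VE noise schedule sigma(t) = sigma_min * (sigma_max / sigma_min)^t."""
    return sigma_min * (sigma_max / sigma_min) ** t

def score(x, t):
    """Stand-in for a pretrained score network s_theta(x, t).
    Exact score of p_t when the data are N(0, I)."""
    return -x / (1.0 + sigma(t) ** 2)

def fit_initialization(noised):
    """Illustrative initialization 'learning': match the first two moments
    of the forward process at the terminal time."""
    return noised.mean(axis=0), noised.std(axis=0)

def reverse_sample(mu, std, n_samples=1000, dim=2, n_steps=20):
    """Euler-Maruyama integration of the reverse-time VE SDE from t=1 to t=0."""
    ts = np.linspace(1.0, 0.0, n_steps + 1)
    x = mu + std * rng.standard_normal((n_samples, dim))   # learned init
    for t, t_next in zip(ts[:-1], ts[1:]):
        dt = t - t_next                                    # positive step
        g2 = (sigma(t) ** 2 - sigma(t_next) ** 2) / dt     # discretized g(t)^2
        x += g2 * score(x, t) * dt + np.sqrt(g2 * dt) * rng.standard_normal(x.shape)
    return x

data = rng.standard_normal((1000, 2))                      # toy target N(0, I)
noised = data + sigma(1.0) * rng.standard_normal(data.shape)
mu, std = fit_initialization(noised)
samples = reverse_sample(mu, std)
print(samples.std(axis=0).round(2))   # roughly 1, up to discretization error
```

Because the initialization already matches the terminal marginal, far fewer backward steps are needed than when starting from an overly wide prior, which is the computational saving the abstract claims.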
Related papers
- An Elementary Approach to Scheduling in Generative Diffusion Models [55.171367482496755]
An elementary approach to characterizing the impact of noise scheduling and time discretization in generative diffusion models is developed. Experiments across different datasets and pretrained models demonstrate that the time discretization strategy selected by our approach consistently outperforms baseline and search-based strategies.
arXiv Detail & Related papers (2026-01-20T05:06:26Z)
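The scheduling question is concrete enough to illustrate. The sketch below shows one widely used baseline family of time discretizations, the polynomial rho-schedule of Karras et al. (2022); it is the kind of baseline such selection methods are compared against, not the paper's own selection rule.

```python
# Minimal sketch of a baseline time discretization for diffusion sampling:
# the polynomial "rho" schedule of Karras et al. (2022). This is NOT the
# discretization selected by the paper above; it illustrates the kind of
# baseline that scheduling methods aim to outperform.
import numpy as np

def karras_sigmas(n_steps, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Noise levels sigma_0 > ... > sigma_{n-1}, spaced uniformly in sigma^(1/rho)."""
    ramp = np.linspace(0.0, 1.0, n_steps)
    inv_rho = 1.0 / rho
    return (sigma_max**inv_rho + ramp * (sigma_min**inv_rho - sigma_max**inv_rho)) ** rho

print(karras_sigmas(10).round(3))  # steps cluster near sigma_min for large rho
```

The parameter rho trades off step density at low versus high noise; choosing it (or replacing the whole family) is exactly the design decision scheduling analyses try to settle.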
- One-Step Diffusion Samplers via Self-Distillation and Deterministic Flow [24.67443222055996]
Existing sampling algorithms typically require many iterative steps to produce high-quality samples. We introduce one-step diffusion samplers, which learn a step-conditioned ODE. We show that standard ELBO estimates degrade in the few-step regime because common discretization schemes yield mismatched forward/backward transition kernels.
arXiv Detail & Related papers (2025-12-04T20:57:53Z)
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- End-To-End Learning of Gaussian Mixture Priors for Diffusion Sampler [15.372235873766812]
Learnable mixture priors offer improved control over exploration, adaptability to the target support, and increased expressiveness to counteract mode collapse. Our experimental results demonstrate significant performance improvements across a diverse range of real-world and synthetic benchmark problems.
arXiv Detail & Related papers (2025-03-01T14:58:14Z)
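As a hedged illustration of what a mixture prior must expose to a diffusion sampler, the sketch below implements sampling and log-density for a diagonal Gaussian mixture. In the paper the mixture parameters are learned end-to-end; here they are fixed, and all names are illustrative.

```python
# Hedged sketch (not the paper's implementation): a Gaussian mixture used
# as a diffusion sampler's prior. Only sampling and log-density are shown,
# the two operations such a prior must expose; the paper learns the
# parameters end-to-end, which this toy version does not do.
import numpy as np

rng = np.random.default_rng(0)

class GaussianMixturePrior:
    def __init__(self, means, log_stds, logits):
        self.means = np.asarray(means)                         # (K, d) means
        self.stds = np.exp(np.asarray(log_stds))               # (K, d) stds
        w = np.exp(np.asarray(logits))
        self.weights = w / w.sum()                             # (K,) weights

    def sample(self, n):
        ks = rng.choice(len(self.weights), size=n, p=self.weights)
        return self.means[ks] + self.stds[ks] * rng.standard_normal((n, self.means.shape[1]))

    def log_prob(self, x):
        # log p(x) = logsumexp_k [ log w_k + log N(x; mu_k, diag(std_k^2)) ]
        z = (x[:, None, :] - self.means[None]) / self.stds[None]
        comp = (-0.5 * (z**2).sum(-1) - np.log(self.stds).sum(-1)
                - 0.5 * self.means.shape[1] * np.log(2 * np.pi))
        a = comp + np.log(self.weights)
        m = a.max(axis=1, keepdims=True)
        return (m + np.log(np.exp(a - m).sum(axis=1, keepdims=True))).squeeze(1)

prior = GaussianMixturePrior(means=[[-2.0, 0.0], [2.0, 0.0]],
                             log_stds=np.zeros((2, 2)), logits=np.zeros(2))
x0 = prior.sample(5)                  # a sampler would start its chains here
print(x0.shape, prior.log_prob(x0).round(2))
```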
- Single-Step Consistent Diffusion Samplers [8.758218443992467]
Existing sampling algorithms typically require many iterative steps to produce high-quality samples. We introduce consistent diffusion samplers, a new class of samplers designed to generate high-fidelity samples in a single step. We show that our approach yields high-fidelity samples using less than 1% of the network evaluations required by traditional diffusion samplers.
arXiv Detail & Related papers (2025-02-11T14:25:52Z)
- Distributional Diffusion Models with Scoring Rules [83.38210785728994]
Diffusion models generate high-quality synthetic data. Generating high-quality outputs, however, requires many discretization steps. We propose to accomplish sample generation by learning the posterior distribution of clean data samples.
arXiv Detail & Related papers (2025-02-04T16:59:03Z)
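Scoring rules are what make "learn a posterior distribution" trainable. As a hedged illustration (not the paper's training code), here is a Monte Carlo estimator of the energy score, a standard proper scoring rule that a distributional model producing samples can minimize against the observed clean data point.

```python
# Hedged sketch (illustrative, not the paper's code): Monte Carlo estimate
# of the energy score, a proper scoring rule. A distributional model that
# outputs samples x_1..x_m of "clean data given noisy input" can be trained
# by minimizing this quantity against the true clean sample y.
import numpy as np

def energy_score(model_samples, y, beta=1.0):
    """ES(P, y) = E||X - y||^beta - 0.5 * E||X - X'||^beta  (lower is better).

    model_samples: (m, d) samples from the model's posterior.
    y:             (d,) observed clean data point.
    """
    m = model_samples.shape[0]
    # Term 1: average distance from model samples to the observation.
    term1 = np.mean(np.linalg.norm(model_samples - y, axis=1) ** beta)
    # Term 2: average pairwise distance between model samples (i != j).
    diffs = model_samples[:, None, :] - model_samples[None, :, :]
    pair = np.linalg.norm(diffs, axis=-1) ** beta
    term2 = pair.sum() / (m * (m - 1))   # diagonal terms are zero
    return term1 - 0.5 * term2

rng = np.random.default_rng(1)
y = np.zeros(2)
good = rng.normal(0.0, 1.0, size=(64, 2))   # samples centered on y
bad = rng.normal(3.0, 1.0, size=(64, 2))    # samples biased away from y
print(energy_score(good, y) < energy_score(bad, y))  # True: lower is better
```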
- Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers [49.1574468325115]
We introduce a unified convergence analysis framework for deterministic samplers. Our framework achieves an iteration complexity of $\tilde{O}(d^2/\epsilon)$. We also provide a detailed analysis of Denoising Diffusion Implicit Models (DDIM)-type samplers.
arXiv Detail & Related papers (2024-10-18T07:37:36Z)
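For context on what a "DDIM-type deterministic sampler" iterates, here is a hedged sketch of the standard DDIM update with eta = 0, assuming a pretrained noise predictor; the placeholder `eps_theta` and the schedule are illustrative, and this is background for the analysis above, not the paper's framework.

```python
# Hedged sketch of the deterministic DDIM update (eta = 0), the kind of
# sampler the convergence analysis above covers. `eps_theta` stands in for
# a pretrained noise-prediction network.
import numpy as np

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM step from noise level t to the previous level."""
    # Predict the clean sample x0 from the current iterate and noise estimate.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Move deterministically toward the previous (lower) noise level.
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps

def sample_ddim(eps_theta, alpha_bars, dim=2, seed=0):
    """Run DDIM from pure noise along an increasing alpha_bar schedule."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    # alpha_bars[0] is the most-noised level; iterate toward alpha_bar near 1.
    for a_t, a_prev in zip(alpha_bars[:-1], alpha_bars[1:]):
        x = ddim_step(x, eps_theta(x, a_t), a_t, a_prev)
    return x

# Placeholder noise predictor: exact posterior mean of eps for N(0, I) data.
eps_theta = lambda x, a: np.sqrt(1.0 - a) * x
alpha_bars = np.linspace(0.01, 0.999, 50)   # increasing alpha_bar = decreasing noise
print(sample_ddim(eps_theta, alpha_bars))
```

The update is deterministic given the initial noise, which is why iteration-complexity bounds like the $\tilde{O}(d^2/\epsilon)$ rate above can be stated per discretization step.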
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution. In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
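The "decoupled sample paths" idea has a compact form, Matheron's rule: a posterior sample is a prior sample plus a data-driven correction. Below is a hedged sketch assuming an RBF kernel and an exact joint prior draw; the paper's efficiency gain comes from approximating the prior part cheaply (e.g., with random features), which this toy version does not do.

```python
# Hedged sketch of pathwise (decoupled) GP posterior sampling via Matheron's
# rule: f_post(x) = f_prior(x) + k(x, X) (K + s^2 I)^{-1} (y - f_prior(X) - eps).
# The prior sample is drawn exactly here for clarity; the paper approximates
# it with random features to get the speedup.
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, lengthscale=0.5):
    """RBF kernel matrix between 1-D input arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

# Training data: noisy observations of sin(x).
X = np.linspace(-3, 3, 20)
noise = 0.1
y = np.sin(X) + noise * rng.normal(size=X.shape)
Xs = np.linspace(-4, 4, 200)                      # test locations

# Draw ONE joint prior sample over [X, Xs].
Z = np.concatenate([X, Xs])
K_joint = rbf(Z, Z) + 1e-8 * np.eye(Z.size)       # jitter for stability
f_prior = np.linalg.cholesky(K_joint) @ rng.normal(size=Z.size)
fX, fXs = f_prior[:X.size], f_prior[X.size:]

# Matheron correction: shift the prior sample to agree with the data.
K = rbf(X, X) + noise**2 * np.eye(X.size)
eps = noise * rng.normal(size=X.size)             # sampled observation noise
update = rbf(Xs, X) @ np.linalg.solve(K, y - fX - eps)
f_post = fXs + update                             # one posterior sample path
print(np.abs(f_post[::50] - np.sin(Xs[::50])).round(2))
```

This prior-plus-update decomposition is what lets decoupled paths be evaluated at many test locations at a fraction of the usual cost.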