Diffusion Path Samplers via Sequential Monte Carlo
- URL: http://arxiv.org/abs/2601.21951v1
- Date: Thu, 29 Jan 2026 16:32:12 GMT
- Title: Diffusion Path Samplers via Sequential Monte Carlo
- Authors: James Matthew Young, Paula Cordero-Encinar, Sebastian Reich, Andrew Duncan, O. Deniz Akyildiz
- Abstract summary: We develop a diffusion-based sampler for target distributions known up to a normalising constant. Our approach is based on a practical implementation of diffusion-annealed Langevin Monte Carlo. We provide theoretical guarantees and empirically demonstrate the effectiveness of our method on several synthetic and real-world datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a diffusion-based sampler for target distributions known up to a normalising constant. To this end, we rely on the well-known diffusion path that smoothly interpolates between a (simple) base distribution and the target distribution, widely used in diffusion models. Our approach is based on a practical implementation of diffusion-annealed Langevin Monte Carlo, which approximates the diffusion path with convergence guarantees. We tackle the score estimation problem by developing an efficient sequential Monte Carlo sampler that evolves auxiliary variables from conditional distributions along the path, which provides principled score estimates for time-varying distributions. We further develop novel control variate schedules that minimise the variance of these score estimates. Finally, we provide theoretical guarantees and empirically demonstrate the effectiveness of our method on several synthetic and real-world datasets.
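The recipe the abstract describes, annealing a Langevin sampler along the diffusion path while estimating the time-varying score from weighted auxiliary particles, can be sketched roughly as follows. The toy bimodal target, the noise schedule, and the plain self-normalised importance-sampling score estimator below are illustrative stand-ins for the paper's SMC machinery and control variates, not the authors' method.

```python
import numpy as np

def log_target(x):
    # Toy unnormalised target: an even mixture of N(-2, 1) and N(2, 1).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def score_estimate(x, s_t, n_particles=256, rng=None):
    """Self-normalised importance-sampling estimate of the score of the
    noisy marginal p_t(x) = ∫ N(x; x0, s_t^2) pi(x0) dx0, with the
    Gaussian N(x, s_t^2) as proposal for the auxiliary variable x0.
    (A crude stand-in for the paper's SMC score estimator.)"""
    rng = np.random.default_rng() if rng is None else rng
    x0 = x + s_t * rng.standard_normal(n_particles)
    # The likelihood N(x; x0, s_t^2) and proposal N(x0; x, s_t^2)
    # cancel, so the weights reduce to the unnormalised target.
    log_w = log_target(x0)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Denoising score identity: grad log p_t(x) = E[(x0 - x) / s_t^2 | x].
    return np.sum(w * (x0 - x)) / s_t ** 2

def annealed_langevin(n_steps=300, base_step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal()               # start from the base N(0, 1)
    for k in range(n_steps):
        s_t = max(1.0 - k / n_steps, 0.05)  # noise level annealed down
        step = base_step * s_t ** 2         # shrink steps with the noise
        s = score_estimate(x, s_t, rng=rng)
        x = x + step * s + np.sqrt(2 * step) * rng.standard_normal()
    return x
```

As the noise level shrinks, the weighted particles concentrate on the posterior over the clean variable, so the estimator recovers the target score and the Langevin iterates settle into one of the modes at ±2.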
Related papers
- Effective Test-Time Scaling of Discrete Diffusion through Iterative Refinement [51.54933696252104]
We introduce Iterative Reward-Guided Refinement (IterRef), a novel test-time scaling method tailored to discrete diffusion. We formalize this process within a Multiple-Try Metropolis framework, proving convergence to the reward-aligned distribution. IterRef achieves striking gains under low compute budgets, far surpassing prior state-of-the-art baselines.
arXiv Detail & Related papers (2025-11-04T02:33:23Z)
- Non-asymptotic Analysis of Diffusion Annealed Langevin Monte Carlo for Generative Modelling [1.9526430269580959]
We provide non-asymptotic error bounds for Langevin dynamics where the path of distributions is defined via Gaussian convolutions of the data distribution, as in diffusion models. We then extend our results to the recently proposed heavy-tailed (Student's t) diffusion paths, demonstrating their theoretical properties for heavy-tailed data distributions for the first time.
arXiv Detail & Related papers (2025-02-13T13:18:30Z)
- Debiasing Guidance for Discrete Diffusion with Sequential Monte Carlo [15.333834240761048]
We introduce a Sequential Monte Carlo algorithm that generates samples unbiasedly from a target distribution. We validate our approach on low-dimensional distributions and on controlled image and text generation.
arXiv Detail & Related papers (2025-02-10T00:27:54Z)
- Sampling in High-Dimensions using Stochastic Interpolants and Forward-Backward Stochastic Differential Equations [8.509310102094512]
We present a class of diffusion-based algorithms to draw samples from high-dimensional probability distributions. Our approach relies on the stochastic interpolants framework to define a time-indexed collection of probability densities. We demonstrate that our algorithm can effectively draw samples from distributions that conventional methods struggle to handle.
arXiv Detail & Related papers (2025-02-01T07:27:11Z)
- Learned Reference-based Diffusion Sampling for multi-modal distributions [2.1383136715042417]
We introduce the Learned Reference-based Diffusion Sampler (LRDS), a methodology specifically designed to leverage prior knowledge of the location of the target modes. LRDS proceeds in two steps, starting by learning a reference diffusion model on samples located in high-density regions of the space. We experimentally demonstrate that LRDS best exploits prior knowledge of the target distribution compared to competing algorithms on a variety of challenging distributions.
arXiv Detail & Related papers (2024-10-25T10:23:34Z)
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers. We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions. This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Broadening Target Distributions for Accelerated Diffusion Models via a Novel Analysis Approach [49.97755400231656]
We show that a new accelerated DDPM sampler achieves accelerated performance for three broad distribution classes not considered before. Our results show an improved dependency on the data dimension $d$ among accelerated DDPM-type samplers.
arXiv Detail & Related papers (2024-02-21T16:11:47Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Improved off-policy training of diffusion samplers [93.66433483772055]
We study the problem of training diffusion models to sample from a distribution with an unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods. Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work.
arXiv Detail & Related papers (2024-02-07T18:51:49Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.