Stochastic Localization via Iterative Posterior Sampling
- URL: http://arxiv.org/abs/2402.10758v2
- Date: Tue, 28 May 2024 12:05:08 GMT
- Title: Stochastic Localization via Iterative Posterior Sampling
- Authors: Louis Grenioux, Maxence Noble, Marylou Gabrié, Alain Oliviero Durmus
- Abstract summary: We consider a general stochastic localization framework and introduce an explicit class of observation processes, associated with flexible denoising schedules.
We provide a complete methodology, $\textit{Stochastic Localization via Iterative Posterior Sampling}$ (SLIPS), to obtain approximate samples of this dynamics and, as a by-product, samples from the target distribution.
We illustrate the benefits and applicability of SLIPS on several benchmarks of multi-modal distributions, including Gaussian mixtures in increasing dimensions, Bayesian logistic regression and a high-dimensional field system from statistical mechanics.
- Score: 2.1383136715042417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building upon score-based learning, new interest in stochastic localization techniques has recently emerged. In these models, one seeks to noise a sample from the data distribution through a stochastic process, called observation process, and progressively learns a denoiser associated to this dynamics. Apart from specific applications, the use of stochastic localization for the problem of sampling from an unnormalized target density has not been explored extensively. This work contributes to fill this gap. We consider a general stochastic localization framework and introduce an explicit class of observation processes, associated with flexible denoising schedules. We provide a complete methodology, $\textit{Stochastic Localization via Iterative Posterior Sampling}$ (SLIPS), to obtain approximate samples of this dynamics, and as a by-product, samples from the target distribution. Our scheme is based on a Markov chain Monte Carlo estimation of the denoiser and comes with detailed practical guidelines. We illustrate the benefits and applicability of SLIPS on several benchmarks of multi-modal distributions, including Gaussian mixtures in increasing dimensions, Bayesian logistic regression and a high-dimensional field system from statistical-mechanics.
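To make the observation-process idea concrete, here is a minimal sketch of isotropic stochastic localization on a toy target, assuming the standard process $Y_t = tX + B_t$. For a Gaussian mixture the denoiser $E[X \mid Y_t]$ has a closed form, so the dynamics can be simulated directly; SLIPS itself replaces this closed form with an MCMC estimate for general unnormalized targets, and the schedule and constants below are illustrative choices, not the paper's.

```python
import numpy as np

# Toy target: two-component 1D Gaussian mixture (weights w, means mu, std s).
w = np.array([0.5, 0.5])
mu = np.array([-5.0, 5.0])
s = 0.3

def denoiser(y, t):
    """Closed-form posterior mean E[X | Y_t = y] for Y_t = t*X + B_t."""
    if t == 0.0:
        return float(w @ mu)                      # prior mean at t = 0
    var = t + (t * s) ** 2                        # Var(Y_t | component k)
    logp = np.log(w) - 0.5 * np.log(var) - 0.5 * (y - t * mu) ** 2 / var
    pw = np.exp(logp - logp.max())
    pw /= pw.sum()                                # posterior component weights
    post_mean = (mu / s**2 + y) / (1.0 / s**2 + t)  # Gaussian conjugacy
    return float(pw @ post_mean)

def sl_sample(rng, T=50.0, n_steps=1000):
    """Euler-Maruyama on dY_t = E[X | Y_t] dt + dB_t; Y_T / T approximates X."""
    dt = T / n_steps
    y, t = 0.0, 0.0
    for _ in range(n_steps):
        y += denoiser(y, t) * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return y / T

rng = np.random.default_rng(0)
samples = np.array([sl_sample(rng) for _ in range(200)])
```

As $t$ grows, the observation localizes on a single point and $Y_T / T$ is an approximate draw from the mixture; the samples should split roughly evenly between the two modes.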
Related papers
- Initialization-Aware Score-Based Diffusion Sampling [2.554905387213586]
Classical samplers initialized from a Gaussian distribution require a long noising time horizon, typically inducing a large number of discretization steps and a high computational cost.
We present a Kullback-Leibler convergence analysis of Variance Exploding diffusion samplers that highlights the critical role of the backward process.
Experiments on toy distributions and benchmark datasets demonstrate competitive or improved generative quality while using significantly fewer sampling steps.
arXiv Detail & Related papers (2026-02-28T18:37:10Z) - Sampling from multi-modal distributions on Riemannian manifolds with training-free stochastic interpolants [17.07401986649233]
We introduce a sampling algorithm based on the simulation of a non-equilibrium deterministic dynamics that transports an easy-to-sample noise distribution toward the target.
In contrast to related generative modeling approaches that rely on machine learning, our method is entirely training-free.
arXiv Detail & Related papers (2026-01-31T10:17:44Z) - Combating Noisy Labels through Fostering Self- and Neighbor-Consistency [120.4394402099635]
Label noise is pervasive in various real-world scenarios, posing challenges in supervised deep learning.
We propose a noise-robust method named Jo-SNC (Joint sample selection and model regularization based on Self- and Neighbor-Consistency).
We design a self-adaptive, data-driven thresholding scheme to adjust per-class selection thresholds.
arXiv Detail & Related papers (2026-01-19T07:55:29Z) - Reinforced sequential Monte Carlo for amortised sampling [49.92678178064033]
We establish a connection between sequential Monte Carlo (SMC) and neural sequential samplers trained by maximum-entropy reinforcement learning (MaxEnt RL).
We describe techniques for stable joint training of proposals and twist functions, and an adaptive weight-tempering scheme to reduce training-signal variance.
arXiv Detail & Related papers (2025-10-13T17:59:11Z) - Sampling by averaging: A multiscale approach to score estimation [4.003851730099099]
We introduce a novel framework for efficient sampling from complex, unnormalised target distributions by exploiting multiscale dynamics.
Two algorithms are developed, MultALMC and MultCDiff, based on multiscale controlled diffusions for the reverse-time Ornstein-Uhlenbeck process.
The framework is extended to handle heavy-tailed target distributions using Student's t-based noise models and tailored fast-process dynamics.
arXiv Detail & Related papers (2025-08-20T21:09:34Z) - Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models.
Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement.
We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z) - Non-equilibrium Annealed Adjoint Sampler [27.73022309947818]
We introduce the Non-equilibrium Annealed Adjoint Sampler (NAAS), a novel SOC-based diffusion sampler.
NAAS employs a lean adjoint system inspired by adjoint matching, enabling efficient and scalable training.
arXiv Detail & Related papers (2025-06-22T20:41:31Z) - Beyond Log-Concavity and Score Regularity: Improved Convergence Bounds for Score-Based Generative Models in W2-distance [0.0]
We present a novel framework for analyzing convergence in Score-based Generative Models (SGMs).
We show that weak log-concavity of the data distribution evolves into log-concavity over time.
Our approach circumvents the need for stringent regularity conditions on the score function.
arXiv Detail & Related papers (2025-01-04T14:33:27Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Sampling, Diffusions, and Stochastic Localization [10.368585938419619]
Diffusions are a successful technique to sample from high-dimensional distributions.
Stochastic localization is a technique to prove mixing of Markov chains and other functional inequalities in high dimension.
An algorithmic version of localization was introduced in [EAMS2022] to obtain an algorithm that samples from certain statistical mechanics models.
arXiv Detail & Related papers (2023-05-18T04:01:40Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Approximate sampling and estimation of partition functions using neural networks [0.0]
We show how variational autoencoders (VAEs) can be applied to this task.
We invert the logic and train the VAE to fit a simple and tractable distribution, on the assumption of a complex and intractable latent distribution, specified up to normalization.
This procedure constructs approximations without the use of training data or Markov chain Monte Carlo sampling.
arXiv Detail & Related papers (2022-09-21T15:16:45Z) - Markov Chain Monte Carlo for Continuous-Time Switching Dynamical Systems [26.744964200606784]
We propose a novel inference algorithm utilizing a Markov Chain Monte Carlo approach.
The presented Gibbs sampler allows one to efficiently obtain samples from the exact continuous-time posterior processes.
arXiv Detail & Related papers (2022-05-18T09:03:00Z) - Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
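The last two entries rest on the same pathwise trick, often called Matheron's rule: a joint prior draw at training and test inputs can be turned into a posterior draw by a deterministic kernel correction, instead of factorizing the posterior covariance. A minimal sketch with an assumed squared-exponential kernel and noiseless observations (all names illustrative; the papers above further approximate the prior draw cheaply, e.g. with random features, which this sketch does not):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix k(a_i, b_j) with lengthscale ls."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(1)
x_train = np.array([-2.0, 0.0, 1.5])   # observed inputs
y_train = np.array([0.5, -1.0, 2.0])   # observed (noiseless) values
x_test = np.linspace(-3, 3, 7)         # includes the points -2 and 0

# 1) Draw one joint prior sample at the train and test locations.
x_all = np.concatenate([x_train, x_test])
K = rbf(x_all, x_all) + 1e-8 * np.eye(len(x_all))  # jitter for stability
f_all = np.linalg.cholesky(K) @ rng.standard_normal(len(x_all))
f_train, f_test = f_all[:3], f_all[3:]

# 2) Matheron's update: posterior sample = prior sample + kernel correction.
Kxx = rbf(x_train, x_train) + 1e-8 * np.eye(3)
Ksx = rbf(x_test, x_train)
f_post = f_test + Ksx @ np.linalg.solve(Kxx, y_train - f_train)
```

At any training input the correction cancels the prior draw exactly, so the posterior sample interpolates the data there (up to jitter), while elsewhere it carries the prior's fluctuations.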
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.