Anchored Langevin Algorithms
- URL: http://arxiv.org/abs/2509.19455v1
- Date: Tue, 23 Sep 2025 18:11:55 GMT
- Title: Anchored Langevin Algorithms
- Authors: Mert Gurbuzbalaban, Hoang M. Nguyen, Xicheng Zhang, Lingjiong Zhu
- Abstract summary: We propose anchored Langevin dynamics, a unified approach that accommodates non-differentiable targets and certain classes of heavy-tailed distributions. We establish non-asymptotic guarantees in the 2-Wasserstein distance to the target distribution and provide an equivalent formulation derived via a random time change of the Langevin diffusion.
- Score: 4.0869042100704895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard first-order Langevin algorithms such as the unadjusted Langevin algorithm (ULA) are obtained by discretizing the Langevin diffusion and are widely used for sampling in machine learning because they scale to high dimensions and large datasets. However, they face two key limitations: (i) they require differentiable log-densities, excluding targets with non-differentiable components; and (ii) they generally fail to sample heavy-tailed targets. We propose anchored Langevin dynamics, a unified approach that accommodates non-differentiable targets and certain classes of heavy-tailed distributions. The method replaces the original potential with a smooth reference potential and modifies the Langevin diffusion via multiplicative scaling. We establish non-asymptotic guarantees in the 2-Wasserstein distance to the target distribution and provide an equivalent formulation derived via a random time change of the Langevin diffusion. We provide numerical experiments to illustrate the theory and practical performance of our proposed approach.
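For reference, here is a minimal sketch of the baseline ULA update that the abstract starts from, assuming a differentiable potential with gradient grad_U; the anchored modification (smooth reference potential plus multiplicative scaling) is described only at a high level in the abstract and is therefore not guessed at here.

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng=None):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    dX_t = -grad U(X_t) dt + sqrt(2) dW_t.  Requires a differentiable
    log-density; this is the baseline that the anchored method modifies."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad U(x) = x.
chain = ula(grad_U=lambda x: x, x0=np.zeros(2), step=1e-2, n_steps=5000)
print(chain.mean(axis=0), chain.var(axis=0))  # roughly 0 and 1 per coordinate
```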
Related papers
- Dimension-Free Multimodal Sampling via Preconditioned Annealed Langevin Dynamics [10.631439631816166]
Annealed Langevin dynamics (ALD) is a widely used alternative to classical Langevin dynamics since it often yields much faster mixing on multimodal targets. We help bridge this gap by providing a uniform-in-dimension analysis of continuous-time ALD for multimodal targets.
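As a rough illustration of the annealing idea (a simple tempering schedule standing in for the preconditioned scheme studied in the cited paper), one can run ULA on potentials scaled by an increasing inverse temperature:

```python
import numpy as np

def annealed_langevin(grad_U, x0, betas, step, steps_per_level, rng=None):
    """Toy annealed Langevin sampler: run ULA on tempered potentials
    beta * U along an increasing schedule of beta ending at 1, so early
    levels are flatter and easier to traverse between modes."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for beta in betas:                      # e.g. np.linspace(0.1, 1.0, 10)
        for _ in range(steps_per_level):
            noise = rng.standard_normal(x.shape)
            x = x - step * beta * grad_U(x) + np.sqrt(2.0 * step) * noise
    return x
```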
arXiv Detail & Related papers (2026-02-01T21:37:33Z) - An Improved Analysis of Langevin Algorithms with Prior Diffusion for Non-Log-Concave Sampling [27.882407333690267]
We show that the modified Langevin algorithm with prior diffusion can converge dimension-independently for strongly log-concave target distributions.
We also prove that the modified Langevin algorithm obtains dimension-independent convergence in KL divergence under different step-size schedules.
arXiv Detail & Related papers (2024-03-10T11:50:34Z) - Broadening Target Distributions for Accelerated Diffusion Models via a Novel Analysis Approach [49.97755400231656]
We show that a new accelerated DDPM sampler achieves accelerated performance for three broad distribution classes not considered before. Our results show an improved dependency on the data dimension $d$ among accelerated DDPM-type samplers.
arXiv Detail & Related papers (2024-02-21T16:11:47Z) - Sampling and estimation on manifolds using the Langevin diffusion [45.57801520690309]
Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered. Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion.
arXiv Detail & Related papers (2023-12-22T18:01:11Z) - Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems [78.96969465641024]
We extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates.
We also study time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result.
arXiv Detail & Related papers (2023-12-02T13:01:29Z) - Proximal Algorithms for Accelerated Langevin Dynamics [57.08271964961975]
We develop a novel class of MCMC algorithms based on a stochastized Nesterov scheme.
We show superior performance of the proposed method over typical Langevin samplers for different models in statistics and image processing.
arXiv Detail & Related papers (2023-11-24T19:56:01Z) - Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Accelerated Bayesian imaging by relaxed proximal-point Langevin sampling [4.848683707039751]
This paper presents a new accelerated proximal Markov chain Monte Carlo methodology to perform Bayesian inference in imaging inverse problems.
For models that are smooth or regularised by Moreau-Yosida smoothing, the midpoint is equivalent to an implicit discretisation of an overdamped Langevin diffusion.
For targets that are $\kappa$-strongly log-concave, the provided non-asymptotic convergence analysis also identifies the optimal time step.
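For context, the Moreau-Yosida smoothing mentioned above replaces a possibly non-smooth potential $U$ with a smooth envelope whose gradient is available through the proximal operator; the standard definitions (generic facts, not the cited paper's relaxed midpoint scheme) are:

```latex
U^{\lambda}(x) = \min_{y} \Big\{ U(y) + \tfrac{1}{2\lambda}\|x - y\|^{2} \Big\},
\qquad
\nabla U^{\lambda}(x) = \frac{x - \operatorname{prox}_{\lambda U}(x)}{\lambda},
```

which yields a smooth Langevin step of the form $x_{k+1} = x_k - \eta \nabla U^{\lambda}(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$.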
arXiv Detail & Related papers (2023-08-18T10:55:49Z) - Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited [4.551456632596834]
We conduct an analysis of the midpoint discretization for the vanilla Langevin process.
This analysis clarifies the principles underlying the randomized midpoint method.
We establish new guarantees for the kinetic Langevin process with Euler discretization.
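For orientation, a plain Euler discretization of the kinetic (underdamped) Langevin process mentioned above can be sketched as follows; this is an illustrative scheme with an arbitrary friction parameter gamma, not the paper's exact algorithm or constants:

```python
import numpy as np

def kinetic_langevin_euler(grad_U, x0, v0, step, gamma, n_steps, rng=None):
    """Euler discretization of kinetic Langevin dynamics:
    dX = V dt,  dV = -gamma V dt - grad U(X) dt + sqrt(2 gamma) dW."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(v.shape)
        x, v = (x + step * v,
                v - step * (gamma * v + grad_U(x)) + np.sqrt(2.0 * gamma * step) * noise)
    return x, v
```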
arXiv Detail & Related papers (2023-06-14T13:18:09Z) - Transport map unadjusted Langevin algorithms: learning and discretizing perturbed samplers [1.993607565985189]
We study the use of transport maps that normalize a target distribution as a way to precondition and accelerate the convergence of Langevin dynamics.
We also show that applying a transport map to an irreversibly-perturbed ULA results in a geometry-informed irreversible perturbation (GiIrr) of the original dynamics.
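As a rough sketch of the transport-map idea (using a simple affine map in place of the learned maps studied in the paper), ULA can be run in reference coordinates and the iterates pushed back through the map:

```python
import numpy as np

def transport_ula(grad_U, L, m, z0, step, n_steps, rng=None):
    """ULA in reference coordinates through an affine map T(z) = L z + m.
    The pullback potential is U(T(z)) up to a constant log-determinant,
    so its gradient is L^T grad_U(T(z)); samples are returned in x-space."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z0, dtype=float).copy()
    xs = []
    for _ in range(n_steps):
        noise = rng.standard_normal(z.shape)
        z = z - step * (L.T @ grad_U(L @ z + m)) + np.sqrt(2.0 * step) * noise
        xs.append(L @ z + m)
    return np.array(xs)
```

With L chosen as a Cholesky factor of an approximate target covariance, this reduces to the familiar preconditioned Langevin update.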
arXiv Detail & Related papers (2023-02-14T18:13:19Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we present two variants of this kind, namely Variance Reduced Langevin Dynamics and Recursive Gradient Langevin Dynamics.
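A hedged sketch of the variance-reduction idea, using an SVRG-style control variate for a finite-sum potential U(x) = sum_i f_i(x) (the precise estimators and step-size rules in the paper may differ):

```python
import numpy as np

def svrg_langevin(grad_f_i, n_data, x0, step, n_epochs, inner_steps, rng=None):
    """Variance-reduced stochastic gradient Langevin sketch: periodically
    compute a full gradient at an anchor point, then correct single-sample
    gradients with it so the estimate stays unbiased for grad U(x)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_epochs):
        x_anchor = x.copy()
        full_grad = sum(grad_f_i(x_anchor, i) for i in range(n_data))
        for _ in range(inner_steps):
            i = rng.integers(n_data)
            g = n_data * (grad_f_i(x, i) - grad_f_i(x_anchor, i)) + full_grad
            noise = rng.standard_normal(x.shape)
            x = x - step * g + np.sqrt(2.0 * step) * noise
    return x
```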
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefits of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
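A standard finite-dimensional calculation illustrates how the learning rate shapes the solution's spectral decomposition (a generic fact about gradient descent on quadratics, not the paper's Hilbert-space result): for $f(x) = \tfrac{1}{2} x^\top A x - b^\top x$ with eigenpairs $(\lambda_i, v_i)$ of $A$, gradient descent from $x_0 = 0$ with step size $\eta$ gives

```latex
x_{t} \;=\; \sum_{i} \frac{1 - (1 - \eta \lambda_i)^{t}}{\lambda_i}\, \langle b, v_i \rangle\, v_i ,
```

so early stopping at step $t$ applies the spectral filter $1 - (1 - \eta\lambda)^{t}$, whose shape is controlled jointly by $\eta$ and $t$.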
arXiv Detail & Related papers (2022-02-28T13:01:04Z)