The Picard-Lagrange Framework for Higher-Order Langevin Monte Carlo
- URL: http://arxiv.org/abs/2510.18242v1
- Date: Tue, 21 Oct 2025 03:04:58 GMT
- Title: The Picard-Lagrange Framework for Higher-Order Langevin Monte Carlo
- Authors: Jaideep Mahajan, Kaihong Zhang, Feng Liang, Jingbo Liu
- Abstract summary: We introduce a new sampling algorithm built on a general $K$th-order Langevin dynamics, extending beyond second- and third-order methods. For targets with smooth, strongly log-concave densities, we prove dimension-dependent convergence in Wasserstein distance. This is the first sampling algorithm achieving such query complexity.
- Score: 15.440889897519483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sampling from log-concave distributions is a central problem in statistics and machine learning. Prior work establishes theoretical guarantees for the Langevin Monte Carlo algorithm based on overdamped and underdamped Langevin dynamics and, more recently, some third-order variants. In this paper, we introduce a new sampling algorithm built on a general $K$th-order Langevin dynamics, extending beyond second- and third-order methods. To discretize the $K$th-order dynamics, we approximate the drift induced by the potential via Lagrange interpolation and refine the node values at the interpolation points using Picard-iteration corrections, yielding a flexible scheme that fully utilizes the acceleration of higher-order Langevin dynamics. For targets with smooth, strongly log-concave densities, we prove dimension-dependent convergence in Wasserstein distance: the sampler achieves $\varepsilon$-accuracy within $\widetilde O(d^{\frac{K-1}{2K-3}}\varepsilon^{-\frac{2}{2K-3}})$ gradient evaluations for $K \ge 3$. To the best of our knowledge, this is the first sampling algorithm achieving such query complexity. The rate improves as the order $K$ increases, yielding better rates than existing first- to third-order approaches.
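To make the discretization idea concrete, the sketch below applies the same two ingredients, Lagrange interpolation of the drift and Picard-iteration refinement of the node values, to a deterministic toy ODE $y' = F(y)$. It only illustrates the collocation mechanism, not the paper's sampler: the equispaced nodes, the number of Picard sweeps, and all helper names are assumptions.

```python
import numpy as np

def lagrange_basis_integrals(nodes, t):
    """Integrals int_0^t L_j(s) ds of the Lagrange basis polynomials
    L_j for the given interpolation nodes."""
    m = len(nodes)
    out = np.empty(m)
    for j in range(m):
        others = np.delete(nodes, j)
        # L_j has roots at the other nodes and value 1 at nodes[j].
        coeffs = np.poly(others) / np.prod(nodes[j] - others)
        antideriv = np.polyint(coeffs)
        out[j] = np.polyval(antideriv, t) - np.polyval(antideriv, 0.0)
    return out

def picard_lagrange_step(F, y0, h, m=4, n_picard=3):
    """One step of size h for y' = F(y): interpolate the drift F at m
    nodes in [0, h], then refine the node values with Picard sweeps
    y_i <- y0 + int_0^{t_i} (interpolant of F) ds."""
    nodes = np.linspace(0.0, h, m)        # equispaced nodes (an assumption)
    Y = np.tile(np.asarray(y0, float), (m, 1))
    for _ in range(n_picard):
        FY = np.array([F(y) for y in Y])  # drift at the current node values
        for i in range(m):
            w = lagrange_basis_integrals(nodes, nodes[i])
            Y[i] = y0 + w @ FY            # Picard correction at node t_i
    return Y[-1]                          # approximation at time h

# Sanity check on y' = -y, y(0) = 1, whose exact solution is exp(-t).
y = np.array([1.0])
for _ in range(10):
    y = picard_lagrange_step(lambda u: -u, y, h=0.1)
print(y[0], np.exp(-1.0))  # the two values should agree to several digits
```

Plugging small orders into the stated complexity bound gives a feel for the acceleration: $K=3$ yields $\widetilde O(d^{2/3}\varepsilon^{-2/3})$, $K=4$ yields $\widetilde O(d^{3/5}\varepsilon^{-2/5})$, and as $K \to \infty$ the dimension exponent tends to $1/2$ while the $\varepsilon$ exponent tends to $0$.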
Related papers
- High-Order Langevin Monte Carlo Algorithms [3.4106874887901437]
Langevin algorithms are popular Markov chain Monte Carlo (MCMC) methods for large-scale sampling problems. We propose $P$-th order Langevin Monte Carlo (LMC) algorithms based on the discretizations of $P$-th order Langevin dynamics. We obtain Wasserstein convergence guarantees for sampling from distributions with log-concave and smooth densities.
arXiv Detail & Related papers (2025-08-24T22:37:44Z)
- Underdamped Langevin MCMC with third order convergence [1.8374319565577153]
We present a new numerical method for the underdamped Langevin diffusion (ULD). Under the assumptions that the gradient and Hessian of $f$ are Lipschitz continuous, our algorithm achieves a 2-Wasserstein error of $\varepsilon$ in $\mathcal{O}(\sqrt{d}/\varepsilon)$ steps. This is the first gradient-only method for ULD with third order convergence.
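For contrast with such higher-order integrators, here is a minimal sketch of the crudest ULD discretization, a plain Euler-Maruyama step: the underlying diffusion leaves $\propto e^{-f(x)-\|v\|^2/2}$ invariant, while the Euler step only converges at first order. The friction $\gamma$, step size, and Gaussian test target are assumptions, and this is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def uld_euler_step(x, v, grad_f, h, gamma=2.0):
    """Euler-Maruyama step for the underdamped Langevin diffusion
    dx = v dt,  dv = -(gamma v + grad f(x)) dt + sqrt(2 gamma) dW."""
    x_new = x + h * v
    v_new = (v - h * (gamma * v + grad_f(x))
             + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape))
    return x_new, v_new

# Toy target: standard Gaussian, f(x) = |x|^2 / 2, so grad f(x) = x.
d, h = 10, 0.05
x, v = np.zeros(d), np.zeros(d)
second_moment, count = 0.0, 0
for step in range(20000):
    x, v = uld_euler_step(x, v, lambda z: z, h)
    if step >= 5000:                  # discard burn-in
        second_moment += np.mean(x * x)
        count += 1
print(second_moment / count)          # close to 1, up to the O(h) bias
```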
arXiv Detail & Related papers (2025-08-22T16:00:01Z)
- Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems [6.431793114484429]
We propose mirror-descent-based algorithms for differentially-private stochastic saddle-point problems in the polyhedral setting. We show that our algorithms attain a rate of $\sqrt{\log(d)}/\sqrt{n} + \log(d)/[n\varepsilon]^{2/5}$ with constant success probability.
arXiv Detail & Related papers (2024-03-05T12:28:00Z)
- ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
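The object ReSQue estimates, the gradient of a Gaussian-smoothed function, admits a standard zeroth-order Monte Carlo identity, sketched below as background; the reweighting that gives ReSQue its name is not reproduced here, and the choices of `sigma` and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def smoothed_grad(f, x, sigma=0.1, n=4096):
    """Monte Carlo estimate of the gradient of the Gaussian convolution
    f_sigma(x) = E[f(x + Z)], Z ~ N(0, sigma^2 I), via the identity
    grad f_sigma(x) = E[(f(x + Z) - f(x)) Z] / sigma^2."""
    Z = sigma * rng.standard_normal((n, x.size))
    vals = np.array([f(x + z) for z in Z]) - f(x)  # f(x) as control variate
    return (vals[:, None] * Z).mean(axis=0) / sigma**2

# Check on f(x) = |x|^2, for which grad f_sigma(x) = 2x exactly.
x = np.array([1.0, -2.0, 0.5])
print(smoothed_grad(lambda z: float(np.dot(z, z)), x))  # approx [2, -4, 1]
```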
arXiv Detail & Related papers (2023-01-01T18:51:29Z)
- Explicit Second-Order Min-Max Optimization: Practical Algorithms and Complexity Analysis [71.05708939639537]
We propose and analyze several inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max problems. Our method improves on existing line-search-based min-max optimization methods by shaving off an $O(\log\log(1/\epsilon))$ factor in the required number of Schur decompositions.
arXiv Detail & Related papers (2022-10-23T21:24:37Z)
- Quantum Algorithms for Sampling Log-Concave Distributions and Estimating Normalizing Constants [8.453228628258778]
We develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants.
We exploit quantum analogs of the Monte Carlo method and quantum walks.
We also prove a $1/\epsilon^{1-o(1)}$ quantum lower bound for estimating normalizing constants.
arXiv Detail & Related papers (2022-10-12T19:10:43Z)
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
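As a rough illustration of what variance reduction means inside a Langevin sampler, the sketch below plugs a generic SVRG-style gradient estimator into vanilla SGLD. It is not the paper's exact Variance Reduced or Recursive scheme; the anchor-refresh schedule, step size, and toy target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def svrg_sgld(grad_i, n_data, x0, h, n_outer=50, n_inner=100):
    """SGLD with an SVRG-style variance-reduced gradient: at each anchor
    x_ref the full gradient mu is computed once, then every step uses
    g = grad_i(x, i) - grad_i(x_ref, i) + mu, an unbiased low-variance
    estimate of the full gradient."""
    x = np.asarray(x0, float).copy()
    for _ in range(n_outer):
        x_ref = x.copy()
        mu = np.mean([grad_i(x_ref, i) for i in range(n_data)], axis=0)
        for _ in range(n_inner):
            i = rng.integers(n_data)
            g = grad_i(x, i) - grad_i(x_ref, i) + mu
            x = x - h * g + np.sqrt(2.0 * h) * rng.standard_normal(x.size)
    return x

# Toy target: f(x) = (1/n) sum_i |x - a_i|^2 / 2, i.e. N(mean(a), I) up
# to a constant, so the returned point is one approximate sample from it.
a = rng.standard_normal((100, 3))
print(svrg_sgld(lambda x, i: x - a[i], 100, np.zeros(3), h=0.05))
```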
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling [110.88857917726276]
We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave.
At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov Chain.
arXiv Detail & Related papers (2020-10-19T15:23:18Z)
- Convergence of Langevin Monte Carlo in Chi-Squared and Renyi Divergence [8.873449722727026]
For convex and first-order smooth potentials, we show that the LMC algorithm achieves the rate estimate $\widetilde{\mathcal{O}}(d\epsilon^{-1})$, which improves the previously known rates in both of these metrics.
arXiv Detail & Related papers (2020-07-22T18:18:28Z)
- Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations [54.42518331209581]
We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\| \le \epsilon$), as well as one that finds an $(\epsilon,\gamma)$-approximate second-order stationary point.
Our lower bounds here are novel even in the noiseless case.
arXiv Detail & Related papers (2020-06-24T04:41:43Z)