Proximal Algorithms for Accelerated Langevin Dynamics
- URL: http://arxiv.org/abs/2311.14829v2
- Date: Tue, 28 Nov 2023 15:27:26 GMT
- Title: Proximal Algorithms for Accelerated Langevin Dynamics
- Authors: Duy H. Thai, Alexander L. Young, David B. Dunson
- Abstract summary: We develop a novel class of MCMC algorithms based on a stochastized Nesterov scheme.
We show superior performance of the proposed method over typical Langevin samplers for different models in statistics and image processing.
- Score: 57.08271964961975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a novel class of MCMC algorithms based on a stochastized Nesterov
scheme. With an appropriate addition of noise, the result is a
time-inhomogeneous underdamped Langevin equation, which we prove admits a
specified target distribution as its invariant measure. Convergence rates to
stationarity under Wasserstein-2 distance are established as well.
Metropolis-adjusted and stochastic gradient versions of the proposed Langevin
dynamics are also provided. Experimental illustrations show superior
performance of the proposed method over typical Langevin samplers for different
models in statistics and image processing, including better mixing of the
resulting Markov chains.
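The abstract does not spell out the damping schedule or noise scaling of the stochastized Nesterov scheme, so the following is only a minimal sketch of the idea: an Euler-Maruyama discretization of an underdamped Langevin equation with a Nesterov-style 3/t friction (an illustrative assumption, not the paper's proven coefficients), targeting a density proportional to exp(-f(x)).

```python
import numpy as np

def grad_f(x):
    # Toy potential f(x) = ||x||^2 / 2, i.e. a standard Gaussian target
    return x

def nesterov_langevin(x0, h=0.01, n_steps=50_000, seed=0):
    """Euler-Maruyama discretization of the underdamped SDE
        dx = v dt,   dv = -(gamma_t * v + grad_f(x)) dt + sqrt(2 * gamma_t) dW,
    with a Nesterov-inspired damping gamma_t = 3 / max(t, 1), floored at 0.5
    to keep this illustration ergodic. Schedule and noise scaling are
    illustrative assumptions only, not the paper's proven coefficients."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    samples = []
    for k in range(1, n_steps + 1):
        gamma = max(3.0 / max(k * h, 1.0), 0.5)   # damping floor: illustration only
        v += -h * (gamma * v + grad_f(x)) \
             + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape)
        x += h * v
        samples.append(x.copy())
    return np.array(samples)

draws = nesterov_langevin(np.zeros(2))
print(draws[10_000:].mean(axis=0), draws[10_000:].var(axis=0))  # ~0 and ~1
```

For a constant damping gamma this is the standard kinetic Langevin sampler, whose invariant measure is proportional to exp(-f(x) - ||v||^2 / 2); the time-dependent damping is what makes the equation time-inhomogeneous, as in the paper.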
Related papers
- High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm [12.405427902037971]
We propose a first-order sampling method for approximate sampling from a target distribution whose support is a proper convex subset of $\mathbb{R}^d$.
Our proposed method is the result of applying a Metropolis-Hastings filter to the Markov chain formed by a single step of the preconditioned Langevin algorithm.
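As a rough illustration of that construction, here is a Metropolis-adjusted preconditioned Langevin step with a fixed positive-definite preconditioner M; the paper's geometry-adapted (barrier-based) preconditioner for constrained supports is not reproduced. Proposals landing outside the support get log-density minus infinity and are rejected by the filter.

```python
import numpy as np

def mala_preconditioned(log_pi, grad_log_pi, M, x0, h=0.05, n_steps=2000, seed=0):
    """One preconditioned Langevin step per iteration, passed through a
    Metropolis-Hastings filter. M is a *fixed* positive-definite
    preconditioner; the paper's position-dependent barrier preconditioner
    is not reproduced in this sketch."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(M)
    x = np.asarray(x0, dtype=float).copy()
    chain = []

    def log_q(y_to, y_from):  # proposal density N(mean(y_from), 2hM), up to constants
        r = y_to - (y_from + h * M @ grad_log_pi(y_from))
        return -0.25 / h * r @ np.linalg.solve(M, r)

    for _ in range(n_steps):
        mean = x + h * M @ grad_log_pi(x)
        y = mean + np.sqrt(2 * h) * L @ rng.standard_normal(x.shape)
        if np.isfinite(log_pi(y)):  # proposals outside the support are rejected
            log_alpha = log_pi(y) + log_q(x, y) - log_pi(x) - log_q(y, x)
            if np.log(rng.uniform()) < log_alpha:
                x = y
        chain.append(x.copy())
    return np.array(chain)

# Example target: standard Gaussian truncated to the positive orthant
log_pi = lambda x: -0.5 * x @ x if np.all(x > 0) else -np.inf
chain = mala_preconditioned(log_pi, lambda x: -x, np.eye(2), np.ones(2))
print(chain.mean(axis=0))
```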
arXiv Detail & Related papers (2024-12-24T23:21:23Z)
- Sampling from Boltzmann densities with physics informed low-rank formats [1.1650821883155187]
We propose the efficient generation of samples from an unnormalized Boltzmann density by solving the underlying continuity equation in the low-rank tensor train (TT) format.
Inspired by Sequential Monte Carlo, we alternate between deterministic time steps from the TT representation of the flow field and stochastic steps, which include Langevin and resampling steps.
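A generic annealed-SMC skeleton of that alternation might look as follows; the deterministic transport step, which the paper evaluates from the TT representation of the flow field, is a user-supplied placeholder here, and the geometric tempering path and weight update are standard SMC choices rather than the paper's.

```python
import numpy as np

def smc_with_transport(log_pi, grad_log_pi, transport_step, d=2, n_part=500,
                       betas=np.linspace(0.05, 1.0, 20), h=0.02, seed=0):
    """Alternate (i) importance resampling, (ii) a deterministic transport
    step, and (iii) tempered Langevin moves along the geometric path
    pi_beta ∝ p0^(1-beta) * pi^beta with base p0 = N(0, I).
    `transport_step(x, beta)` is a placeholder for the TT flow-field step."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_part, d))                 # draws from p0
    beta_prev = 0.0
    for beta in betas:
        # incremental weights for pi_{beta_prev} -> pi_beta (constants cancel)
        log_ratio = np.array([log_pi(x) + 0.5 * x @ x for x in X])
        logw = (beta - beta_prev) * log_ratio
        w = np.exp(logw - logw.max()); w /= w.sum()
        X = X[rng.choice(n_part, n_part, p=w)]           # multinomial resampling
        X = np.array([transport_step(x, beta) for x in X])
        for _ in range(5):                               # tempered Langevin moves
            drift = beta * np.array([grad_log_pi(x) for x in X]) - (1 - beta) * X
            X += h * drift + np.sqrt(2 * h) * rng.standard_normal(X.shape)
        beta_prev = beta
    return X

# Identity transport as a hypothetical stand-in for the TT flow field
X = smc_with_transport(lambda x: -0.5 * (x - 3) @ (x - 3),
                       lambda x: -(x - 3), lambda x, b: x)
print(X.mean(axis=0))   # ≈ [3, 3]
```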
arXiv Detail & Related papers (2024-12-10T16:17:03Z)
- AdamMCMC: Combining Metropolis Adjusted Langevin with Momentum-based Optimization [0.0]
Uncertainty estimation is a key issue when considering the application of deep neural network methods in science and engineering.
We introduce a novel algorithm that quantifies uncertainty via Monte Carlo sampling from a tempered posterior distribution.
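The entry above gives little algorithmic detail beyond tempering, so the following is only a plain MALA sketch targeting a tempered posterior pi(theta)^(1/T); AdamMCMC's actual proposal is built from Adam-style momentum updates plus injected noise, with a matching acceptance correction, which this sketch deliberately omits.

```python
import numpy as np

def tempered_mala(log_post, grad_log_post, theta0, T=2.0, h=0.1,
                  n_steps=20_000, seed=0):
    """MALA targeting the tempered posterior pi_T ∝ exp(log_post / T).
    AdamMCMC derives its proposal from Adam-style momentum updates plus
    injected noise; this plain MALA variant only illustrates sampling
    from a tempered target."""
    rng = np.random.default_rng(seed)
    lp = lambda t: log_post(t) / T
    gl = lambda t: grad_log_post(t) / T
    th = np.asarray(theta0, dtype=float).copy()
    chain = []
    for _ in range(n_steps):
        mean = th + h * gl(th)
        prop = mean + np.sqrt(2 * h) * rng.standard_normal(th.shape)
        back = prop + h * gl(prop)               # reverse-proposal mean
        log_alpha = lp(prop) - lp(th) \
            - (np.sum((th - back) ** 2) - np.sum((prop - mean) ** 2)) / (4 * h)
        if np.log(rng.uniform()) < log_alpha:
            th = prop
        chain.append(th.copy())
    return np.array(chain)

# Toy "posterior" ∝ exp(-||theta||^2 / 2); tempered with T=2 it is N(0, 2I)
chain = tempered_mala(lambda t: -0.5 * t @ t, lambda t: -t, np.zeros(2))
print(chain[5000:].var(axis=0))   # ≈ 2
```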
arXiv Detail & Related papers (2023-12-21T16:58:49Z)
- Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems [78.96969465641024]
We extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates.
We also study time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result.
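In particle form, symmetric updates of this kind can be caricatured as two interacting Langevin systems updated simultaneously, one descending and one ascending; the toy bilinear objective, step size, and noise level below are all illustrative assumptions, not the paper's setting.

```python
import numpy as np

def symmetric_mfld(grad_x, grad_y, n_part=256, n_steps=2000, h=0.01,
                   sigma=0.5, seed=0):
    """Two interacting particle clouds for min over mu, max over nu of
    E f(x, y): the min player's particles take noisy gradient descent steps,
    the max player's take noisy ascent steps, applied simultaneously
    (symmetrically). All quantities here are illustrative."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n_part)
    Y = rng.standard_normal(n_part)
    for _ in range(n_steps):
        gx, gy = grad_x(X, Y), grad_y(X, Y)   # evaluated before either update
        scale = sigma * np.sqrt(2 * h)
        X = X - h * gx + scale * rng.standard_normal(n_part)
        Y = Y + h * gy + scale * rng.standard_normal(n_part)
    return X, Y

# Hypothetical toy game f(x, y) = x * E[y] + x^2/2 - y^2/2, coupled via means
gx = lambda X, Y: Y.mean() + X
gy = lambda X, Y: X.mean() - Y
X, Y = symmetric_mfld(gx, gy)
print(X.mean(), Y.mean())   # both hover near the equilibrium at 0
```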
arXiv Detail & Related papers (2023-12-02T13:01:29Z)
- Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score-based MCMC method that is noise-free, resulting in a deterministic evolution for the particles.
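The underlying idea can be sketched via the probability-flow view of Langevin dynamics: replace the Brownian term in dx = -grad V dt + sqrt(2) dW by the deterministic drift -grad V(x) - grad log rho_t(x). The paper obtains the score from a regularized Wasserstein proximal in closed form; the kernel density estimate below is a simpler stand-in estimator, used only to make the particle evolution concrete.

```python
import numpy as np

def deterministic_score_flow(grad_V, X0, h=0.05, n_steps=500, bw=0.3):
    """Noise-free particle evolution along the probability-flow drift
        dx/dt = -grad_V(x) - grad log rho_t(x),
    which shares its marginals with the Langevin SDE dx = -grad_V dt + sqrt(2) dW.
    The Gaussian-KDE score below is a stand-in, not the paper's regularized
    Wasserstein proximal estimator."""
    X = np.asarray(X0, dtype=float).copy()
    for _ in range(n_steps):
        D = X[:, None, :] - X[None, :, :]                  # pairwise x_i - x_j
        K = np.exp(-np.sum(D ** 2, axis=-1) / (2 * bw ** 2))
        # KDE score: grad log rho(x_i) ≈ sum_j K_ij (x_j - x_i) / (bw^2 sum_j K_ij)
        score = (K[:, :, None] * (-D)).sum(axis=1) / (bw ** 2 * K.sum(axis=1)[:, None])
        X = X + h * (-grad_V(X) - score)                   # fully deterministic
    return X

rng = np.random.default_rng(0)
X = deterministic_score_flow(lambda X: X, 3.0 + rng.standard_normal((300, 2)))
print(X.mean(axis=0))   # particles drift toward the N(0, I) target
```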
arXiv Detail & Related papers (2023-08-28T23:51:33Z)
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
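For a concrete sense of such objectives: when p_theta is known only up to a normalizer, d/dtheta E_{p_theta}[f] equals Cov(f(x), d/dtheta log ptilde_theta(x)), which can be estimated from MH samples. This classical score-function identity is shown below as a point of reference; the paper's contribution is differentiating through the MH sampler itself, a different and generally lower-variance construction.

```python
import numpy as np

def grad_expectation(f, grad_theta_log_ptilde, samples):
    """Estimates d/dtheta E_{p_theta}[f(x)] = Cov(f(x), d/dtheta log ptilde(x))
    for an unnormalized density ptilde_theta, using samples from p_theta
    (e.g. produced by a Metropolis-Hastings chain)."""
    fx = np.array([f(x) for x in samples])
    s = np.array([grad_theta_log_ptilde(x) for x in samples])
    return np.mean((fx - fx.mean())[:, None] * (s - s.mean(axis=0)), axis=0)

# Sanity check: p_theta = N(theta, 1), f(x) = x, so d/dtheta E[f] = 1
rng = np.random.default_rng(0)
theta, x, samples = 2.0, 0.0, []
for _ in range(20_000):                       # random-walk Metropolis
    y = x + rng.normal()
    if np.log(rng.uniform()) < -0.5 * ((y - theta) ** 2 - (x - theta) ** 2):
        x = y
    samples.append(x)
print(grad_expectation(lambda x: x,
                       lambda x: np.array([x - theta]), samples[5000:]))  # ≈ [1.0]
```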
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems [97.64313409741614]
We prove fast mixing and characterize the stationary distribution of the Langevin Algorithm for inverting random weighted DNN generators.
We propose to do posterior sampling in the latent space of a pre-trained generative model.
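A bare-bones version of latent-space posterior sampling: run Langevin dynamics on log p(z | y) = log p(y | G(z)) + log p(z) with a standard normal prior on z. Both `decoder` and `grad_z_log_lik` are hypothetical placeholders for the pretrained generator and the autodiff gradient of the data likelihood through it; neither a Metropolis correction nor the paper's score guidance is included.

```python
import numpy as np

def latent_langevin(decoder, grad_z_log_lik, z0, h=1e-3, n_steps=2000, seed=0):
    """Unadjusted Langevin in the latent space of a pretrained generator G,
    targeting p(z | y) ∝ p(y | G(z)) N(z; 0, I). `decoder` and
    `grad_z_log_lik` are hypothetical stand-ins for the generator and the
    gradient of log p(y | G(z)) w.r.t. z (computed by autodiff in practice)."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float).copy()
    draws = []
    for _ in range(n_steps):
        grad = grad_z_log_lik(z) - z          # likelihood score + N(0, I) prior
        z = z + h * grad + np.sqrt(2 * h) * rng.standard_normal(z.shape)
        draws.append(decoder(z))
    return draws

# Toy linear "generator" with Gaussian likelihood (purely illustrative)
A, y = np.array([[1.0, 0.5], [0.0, 1.0]]), np.array([1.0, 2.0])
out = latent_langevin(lambda z: A @ z,
                      lambda z: A.T @ (y - A @ z), np.zeros(2))
print(np.mean(out[500:], axis=0))
```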
arXiv Detail & Related papers (2022-06-18T03:47:37Z)
- Hessian-Free High-Resolution Nesterov Acceleration for Sampling [55.498092486970364]
Nesterov's Accelerated Gradient (NAG) for optimization has better performance than its continuous-time limit (noiseless kinetic Langevin) when a finite step size is employed.
This work explores the sampling counterpart of this phenomenon and proposes a diffusion process whose discretizations can yield accelerated gradient-based MCMC methods.
arXiv Detail & Related papers (2020-06-16T15:07:37Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed noise in the gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We also derive the first non-trivial high-probability complexity bounds for SGD with clipping without a light-tails assumption on the noise.
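The clipping operator itself is simple; a minimal sketch of plain clipped SGD follows. clipped-SSTM wraps this operator inside Nesterov-type acceleration with a specific step-size schedule, which is not reproduced here; the Student-t noise is just an illustrative heavy-tailed example.

```python
import numpy as np

def clip_gradient(g, lam):
    """Rescale g to norm at most lam, taming heavy-tailed gradient noise.
    This is the operator clipped-SGD/clipped-SSTM style methods rely on."""
    norm = np.linalg.norm(g)
    return g * min(1.0, lam / norm) if norm > 0 else g

def clipped_sgd(grad_fn, x0, lam=1.0, lr=0.05, n_steps=3000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        g = grad_fn(x, rng)                   # stochastic, possibly heavy-tailed
        x = x - lr * clip_gradient(g, lam)
    return x

# Toy quadratic with heavy-tailed (Student-t, df=2) gradient noise
grad_fn = lambda x, rng: x + rng.standard_t(2.0, size=x.shape)
print(clipped_sgd(grad_fn, np.full(3, 5.0)))  # should approach the minimum at 0
```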
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.