A Proximal Algorithm for Sampling from Non-smooth Potentials
- URL: http://arxiv.org/abs/2110.04597v1
- Date: Sat, 9 Oct 2021 15:26:07 GMT
- Title: A Proximal Algorithm for Sampling from Non-smooth Potentials
- Authors: Jiaming Liang, Yongxin Chen
- Abstract summary: We propose a novel MCMC algorithm for sampling from non-smooth potentials.
Our method is based on the proximal bundle method and an alternating sampling framework.
One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential.
- Score: 10.980294435643398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Markov chain Monte Carlo (MCMC) is an effective and dominant method to sample
from high-dimensional complex distributions. Yet, most existing MCMC methods
are only applicable to settings with smooth potentials (log-densities). In this
work, we examine sampling problems with non-smooth potentials. We propose a
novel MCMC algorithm for sampling from non-smooth potentials. We provide a
non-asymptotic analysis of our algorithm and establish a polynomial-time
complexity $\tilde {\cal O}(d\varepsilon^{-1})$ to obtain $\varepsilon$ total
variation distance to the target density, better than all existing results
under the same assumptions. Our method is based on the proximal bundle method
and an alternating sampling framework. This framework requires the so-called
restricted Gaussian oracle, which can be viewed as a sampling counterpart of
the proximal mapping in convex optimization. One key contribution of this work
is a fast algorithm that realizes the restricted Gaussian oracle for any convex
non-smooth potential with bounded Lipschitz constant.
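Below is a minimal sketch of the alternating sampling framework described in the abstract, not the authors' implementation: it assumes the illustrative non-smooth potential f(x) = ||x||_1 (whose proximal mapping has a closed form) and realizes the restricted Gaussian oracle by rejection sampling with a Gaussian proposal centered at the proximal point, whereas the paper uses the proximal bundle method so that no closed-form proximal mapping is required. The step size eta and all names below are illustrative assumptions.
```python
# Sketch: alternating sampling framework with a rejection-sampling RGO.
# Assumptions (not from the paper): f(x) = ||x||_1, step size eta = 0.1.
import numpy as np


def f(x):
    # Non-smooth convex potential; target density is proportional to exp(-f(x)).
    return np.sum(np.abs(x))


def prox_f(y, eta):
    # prox_{eta * f}(y) for the l1 norm is soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - eta, 0.0)


def rgo(y, eta, rng):
    # Restricted Gaussian oracle: draw x with density proportional to
    # exp(-f(x) - ||x - y||^2 / (2 * eta)) by rejection sampling.
    # h(x) = f(x) + ||x - y||^2 / (2 * eta) is (1/eta)-strongly convex with
    # minimizer x* = prox_{eta * f}(y), so h(x) >= h(x*) + ||x - x*||^2 / (2 * eta)
    # and N(x*, eta * I) is a valid proposal/envelope.
    x_star = prox_f(y, eta)
    h_star = f(x_star) + np.sum((x_star - y) ** 2) / (2 * eta)
    while True:
        x = x_star + np.sqrt(eta) * rng.standard_normal(y.shape)
        h_x = f(x) + np.sum((x - y) ** 2) / (2 * eta)
        log_acc = -(h_x - h_star - np.sum((x - x_star) ** 2) / (2 * eta))  # <= 0
        if np.log(rng.uniform()) <= log_acc:
            return x


def alternating_sampler(x0, eta, n_iters, rng):
    # Gibbs-type alternation on pi(x, y) proportional to
    # exp(-f(x) - ||x - y||^2 / (2 * eta)); the x-marginal is exp(-f(x)).
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape)  # y | x is Gaussian
        x = rgo(y, eta, rng)                                  # x | y via the RGO
        samples.append(x.copy())
    return np.array(samples)


rng = np.random.default_rng(0)
draws = alternating_sampler(x0=np.zeros(5), eta=0.1, n_iters=2000, rng=rng)
print(draws.mean(axis=0))  # should be near 0 for the symmetric l1 target
```
For a general convex non-smooth f without a closed-form proximal mapping, prox_f above would be replaced by an approximate proximal point computed with the proximal bundle method, which is the fast RGO realization highlighted as the key contribution.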
Related papers
- Harmonic Path Integral Diffusion [0.4527270266697462]
We present a novel approach for sampling from a continuous multivariate probability distribution, which may either be explicitly known (up to a normalization factor) or represented via empirical samples.
Our method constructs a time-dependent bridge from a delta function centered at the origin of the state space at $t=0$, transforming it into the target distribution at $t=1$.
We contrast these algorithms with other sampling methods, particularly simulated annealing and path integral sampling, highlighting their advantages in terms of analytical control, accuracy, and computational efficiency.
arXiv Detail & Related papers (2024-09-23T16:20:21Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we consider a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral initialization, using a simple gradient descent step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Faster Sampling via Stochastic Gradient Proximal Sampler [28.422547264326468]
Stochastic Proximal Samplers (SPS) for sampling from non-log-concave distributions are studied.
We show that the convergence to the target distribution can be guaranteed as long as the algorithm trajectory is bounded.
We provide two implementable variants based on stochastic gradient Langevin dynamics (SGLD) and the Metropolis-adjusted Langevin algorithm (MALA), giving rise to SPS-SGLD and SPS-MALA; a simplified sketch of this inexact-oracle idea appears after this list.
arXiv Detail & Related papers (2024-05-27T00:53:18Z) - Proximal Oracles for Optimization and Sampling [18.77973093341588]
We consider convex optimization with non-smooth objective function and log-concave sampling with non-smooth potential.
To overcome the challenges caused by non-smoothness, our algorithms employ two powerful proximal frameworks in optimization and sampling.
arXiv Detail & Related papers (2024-04-02T18:52:28Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and involve a large number of data points.
Our proposed method outperforms an existing method with the previously best-known complexity result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z) - A Proximal Algorithm for Sampling from Non-convex Potentials [14.909442791255042]
We consider sampling problems with non-convex potentials that lack smoothness.
Rather than smooth, the potentials are assumed to be semi-smooth or compositions of multiple semi-smooth functions.
We develop a sampling algorithm based on a special case of Gibbs sampling known as the alternating sampling framework.
arXiv Detail & Related papers (2022-05-20T13:58:46Z) - A Proximal Algorithm for Sampling [14.909442791255042]
We study sampling problems associated with potentials that lack smoothness.
The potentials can be either convex or non-convex.
Our algorithm implements its key sampling step via rejection sampling.
arXiv Detail & Related papers (2022-02-28T17:26:09Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
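As a rough illustration of the inexact-oracle idea in the stochastic gradient proximal sampler entry above, the sketch below replaces the exact restricted Gaussian oracle with a short inner loop of unadjusted Langevin steps targeting exp(-f(x) - ||x - y||^2/(2*eta)). The double-well potential, step sizes, and inner iteration count are assumptions for illustration, not details of SPS-SGLD or SPS-MALA.
```python
# Sketch: proximal sampler with an inexact RGO implemented by a short inner
# Langevin loop (assumed toy setup; not the SPS-SGLD algorithm itself).
import numpy as np


def grad_f(x):
    # Gradient of a smooth, non-log-concave double-well potential
    # f(x) = (||x||^2 - 1)^2 (illustrative choice).
    return 4.0 * (np.dot(x, x) - 1.0) * x


def inexact_rgo(y, eta, rng, inner_steps=20, step=1e-3):
    # Approximately sample from exp(-f(x) - ||x - y||^2 / (2 * eta)) by running
    # unadjusted Langevin dynamics on the regularized potential.
    x = y.copy()
    for _ in range(inner_steps):
        grad = grad_f(x) + (x - y) / eta
        x = x - step * grad + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x


def proximal_sampler(x0, eta, n_iters, rng):
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape)  # exact y | x step
        x = inexact_rgo(y, eta, rng)                          # approximate x | y step
        samples.append(x.copy())
    return np.array(samples)


rng = np.random.default_rng(1)
draws = proximal_sampler(x0=np.ones(2), eta=0.05, n_iters=1000, rng=rng)
print(np.linalg.norm(draws, axis=1).mean())  # concentrates near the well at radius 1
```
In the actual SPS variants, the inner update would use stochastic gradients (SGLD) or a Metropolis-adjusted step (MALA) in place of the plain Langevin update shown here.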
This list is automatically generated from the titles and abstracts of the papers in this site.