A Proximal Algorithm for Sampling
- URL: http://arxiv.org/abs/2202.13975v3
- Date: Fri, 30 Jun 2023 23:06:46 GMT
- Title: A Proximal Algorithm for Sampling
- Authors: Jiaming Liang, Yongxin Chen
- Abstract summary: We study sampling problems associated with potentials that lack smoothness.
The potentials can be either convex or non-convex.
Our algorithm is based on a special case of Gibbs sampling known as the alternating sampling framework (ASF), realized via rejection sampling.
- Score: 14.909442791255042
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study sampling problems associated with potentials that lack smoothness.
The potentials can be either convex or non-convex. Departing from the standard
smooth setting, the potentials are only assumed to be weakly smooth or
non-smooth, or the summation of multiple such functions. We develop a sampling
algorithm that resembles proximal algorithms in optimization for this
challenging sampling task. Our algorithm is based on a special case of Gibbs
sampling known as the alternating sampling framework (ASF). The key
contribution of this work is a practical realization of the ASF based on
rejection sampling for both non-convex and convex potentials that are not
necessarily smooth. In almost all the cases of sampling considered in this
work, our proximal sampling algorithm achieves better complexity than all
existing methods.
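As a rough illustration of the alternating sampling framework described in the abstract, the Python sketch below alternates an exact Gaussian draw for the auxiliary variable with a rejection-sampling step standing in for the restricted Gaussian oracle (RGO). The rejection scheme shown here (proposal N(y, eta*I), acceptance exp(-(f(x) - f_lower)) for a known lower bound f_lower on f) is a simplified variant for illustration, not the paper's optimized RGO construction; all function and parameter names are assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(0)

def rgo_rejection(f, y, eta, f_lower, max_tries=10000):
    # Sample x with density proportional to exp(-f(x) - ||x - y||^2 / (2*eta)).
    # Proposal: x ~ N(y, eta*I); accept with probability exp(-(f(x) - f_lower)),
    # a valid rejection step whenever f_lower <= inf f.
    d = y.shape[0]
    for _ in range(max_tries):
        x = y + np.sqrt(eta) * rng.standard_normal(d)
        if rng.uniform() < np.exp(-(f(x) - f_lower)):
            return x
    raise RuntimeError("rejection step failed; try a smaller eta")

def proximal_sampler(f, x0, eta, n_steps, f_lower=0.0):
    # ASF: Gibbs sampling on the joint density proportional to
    # exp(-f(x) - ||x - y||^2 / (2*eta)), whose x-marginal is exp(-f(x)).
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        # Step 1: y | x is an exact Gaussian draw, y ~ N(x, eta*I).
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape[0])
        # Step 2: x | y via the (rejection-based) restricted Gaussian oracle.
        x = rgo_rejection(f, y, eta, f_lower)
        samples.append(x.copy())
    return np.array(samples)

# Example: the non-smooth potential f(x) = ||x||_1 (Laplace target), min f = 0.
draws = proximal_sampler(lambda x: np.abs(x).sum(), x0=np.zeros(2), eta=0.25, n_steps=2000)
print(draws.mean(axis=0), draws.std(axis=0))

Smaller eta raises the acceptance rate of the rejection step at the cost of slower mixing of the Gibbs chain, mirroring the step-size trade-off the paper analyzes.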
Related papers
- Proximal Oracles for Optimization and Sampling [18.77973093341588]
We consider convex optimization with non-smooth objective function and log-concave sampling with non-smooth potential.
To overcome the challenges caused by non-smoothness, our algorithms employ two powerful proximal frameworks in optimization and sampling.
arXiv Detail & Related papers (2024-04-02T18:52:28Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Learning Rate Free Sampling in Constrained Domains [21.853333421463603]
We introduce a suite of new particle-based algorithms for sampling in constrained domains which are entirely learning rate free.
We demonstrate the performance of our algorithms on a range of numerical examples, including sampling from targets on the simplex.
arXiv Detail & Related papers (2023-05-24T09:31:18Z) - A Proximal Algorithm for Sampling from Non-convex Potentials [14.909442791255042]
We consider sampling problems with non-convex potentials that lack smoothness.
Rather than smooth, the potentials are only assumed to be semi-smooth or the summation of multiple semi-smooth functions.
Our algorithm is based on a special case of Gibbs sampling known as the alternating sampling framework.
arXiv Detail & Related papers (2022-05-20T13:58:46Z) - A Proximal Algorithm for Sampling from Non-smooth Potentials [10.980294435643398]
We propose a novel MCMC algorithm for sampling from non-smooth potentials.
Our method is based on the proximal bundle method and an alternating sampling framework.
One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential.
arXiv Detail & Related papers (2021-10-09T15:26:07Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel Optimization [62.87181271021217]
We consider nonconvex stochastic bilevel optimization (SBO) problems that have many applications in machine learning.
This paper proposes fast randomized algorithms for these nonconvex SBO problems.
arXiv Detail & Related papers (2021-05-05T18:28:42Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov decision processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)