Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin
Algorithm
- URL: http://arxiv.org/abs/2006.09270v2
- Date: Mon, 22 Feb 2021 15:12:16 GMT
- Title: Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin
Algorithm
- Authors: Adil Salim and Peter Richtárik
- Abstract summary: We consider the task of sampling with respect to a log concave probability distribution.
The target distribution can be seen as a minimizer of the Kullback-Leibler divergence defined on the Wasserstein space.
We show that if the potential is strongly convex, the complexity of PSGLA is $O(1/\varepsilon^2)$ in terms of the 2-Wasserstein distance.
- Score: 11.80267432402723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the task of sampling with respect to a log concave probability
distribution. The potential of the target distribution is assumed to be
composite, \textit{i.e.}, written as the sum of a smooth convex term, and a
nonsmooth convex term possibly taking infinite values. The target distribution
can be seen as a minimizer of the Kullback-Leibler divergence defined on the
Wasserstein space (\textit{i.e.}, the space of probability measures). In the
first part of this paper, we establish a strong duality result for this
minimization problem. In the second part of this paper, we use the duality gap
arising from the first part to study the complexity of the Proximal Stochastic
Gradient Langevin Algorithm (PSGLA), which can be seen as a generalization of
the Projected Langevin Algorithm. Our approach relies on viewing PSGLA as a
primal dual algorithm and covers many cases where the target distribution is
not fully supported. In particular, we show that if the potential is strongly
convex, the complexity of PSGLA is $O(1/\varepsilon^2)$ in terms of the
2-Wasserstein distance. In contrast, the complexity of the Projected Langevin
Algorithm is $O(1/\varepsilon^{12})$ in terms of total variation when the
potential is convex.
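The PSGLA update described above alternates a stochastic gradient step on the smooth term, Gaussian noise injection, and the proximal map of the nonsmooth term. A minimal sketch of one such iteration, with hypothetical `grad_f` and `prox_g` callables standing in for the smooth gradient and proximal operator (the concrete example uses a quadratic potential and the nonnegative orthant, not anything from the paper):

```python
import numpy as np

def psgla_step(x, grad_f, prox_g, step):
    """One PSGLA-style update: gradient step on the smooth term f,
    Gaussian noise injection, then the proximal map of the
    nonsmooth term g (signatures are illustrative assumptions)."""
    noise = np.sqrt(2.0 * step) * np.random.randn(*x.shape)
    return prox_g(x - step * grad_f(x) + noise, step)

# Toy example: f(x) = ||x||^2 / 2, g = indicator of the nonnegative
# orthant, so prox_g is the projection max(x, 0) and the target is a
# Gaussian truncated to the orthant (a distribution not fully supported).
x = np.ones(3)
for _ in range(1000):
    x = psgla_step(
        x,
        grad_f=lambda v: v,
        prox_g=lambda v, s: np.maximum(v, 0.0),
        step=0.01,
    )
```

Because the proximal map is applied last, every iterate lies in the support of the target, which is the feature that lets the analysis cover targets that are not fully supported.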
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - Faster Sampling via Stochastic Gradient Proximal Sampler [28.422547264326468]
Stochastic Proximal Samplers (SPS) for sampling from non-log-concave distributions are studied.
We show that the convergence to the target distribution can be guaranteed as long as the algorithm trajectory is bounded.
We provide two implementable variants based on stochastic gradient Langevin dynamics (SGLD) and the Metropolis-adjusted Langevin algorithm (MALA), giving rise to SPS-SGLD and SPS-MALA.
arXiv Detail & Related papers (2024-05-27T00:53:18Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Optimal Horizon-Free Reward-Free Exploration for Linear Mixture MDPs [60.40452803295326]
We propose a new reward-free algorithm for learning linear Markov decision processes (MDPs)
At the core of our algorithm is uncertainty-weighted value-targeted regression with exploration-driven pseudo-reward.
We show that our algorithm only needs to explore $\tilde{O}(d^2\varepsilon^{-2})$ episodes to find an $\varepsilon$-optimal policy.
arXiv Detail & Related papers (2023-03-17T17:53:28Z) - Projected Stochastic Gradient Langevin Algorithms for Constrained
Sampling and Non-Convex Learning [0.0]
Langevin algorithms are gradient-based methods with additive noise.
Langevin algorithms have been used for decades in Markov chain Monte Carlo (MCMC).
Projected variants are analyzed for constrained sampling and non-convex learning.
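The projected variant replaces the proximal map with a Euclidean projection onto the constraint set after the noisy gradient step. A minimal sketch under that assumption, using an illustrative unit-ball constraint and quadratic potential (not the paper's specific setting):

```python
import numpy as np

def projected_sgld_step(x, stoch_grad, project, step):
    """One projected SGLD-style update: stochastic gradient step with
    additive Gaussian noise, followed by projection onto the
    constraint set (signatures are illustrative assumptions)."""
    noise = np.sqrt(2.0 * step) * np.random.randn(*x.shape)
    return project(x - step * stoch_grad(x) + noise)

def proj_ball(v):
    """Euclidean projection onto the unit ball."""
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

# Toy example: potential ||x||^2 / 2, constrained to the unit ball.
x = np.zeros(2)
for _ in range(500):
    x = projected_sgld_step(x, stoch_grad=lambda v: v, project=proj_ball, step=0.01)
```

Projection after the noise step is what keeps every iterate feasible, at the cost of the bias the complexity analysis must control.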
arXiv Detail & Related papers (2020-12-22T16:19:20Z) - Complexity of zigzag sampling algorithm for strongly log-concave
distributions [6.336005544376984]
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence at an explicitly bounded computational cost.
arXiv Detail & Related papers (2020-12-21T03:10:21Z) - Riemannian Langevin Algorithm for Solving Semidefinite Programs [9.340611077939828]
We propose a Langevin-based algorithm for non-convex optimization and sampling on a product manifold of spheres.
We show that the proposed Langevin algorithm achieves $\epsilon$-accuracy with high probability.
arXiv Detail & Related papers (2020-10-21T17:51:08Z) - Convergence of Langevin Monte Carlo in Chi-Squared and Renyi Divergence [8.873449722727026]
For convex and first-order smooth potentials, we show that the LMC algorithm achieves the rate estimate $\widetilde{\mathcal{O}}(d\epsilon^{-1})$, which improves the previously known rates in both of these metrics.
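The unadjusted Langevin Monte Carlo (LMC) iteration analyzed in these rate results is the plain discretization of the Langevin diffusion, with no proximal or projection step. A minimal sketch, using an illustrative standard-Gaussian target (so $\nabla U(x) = x$) purely as a stand-in:

```python
import numpy as np

def lmc_step(x, grad_potential, step):
    """Unadjusted LMC update for a potential U:
    x_{k+1} = x_k - step * grad U(x_k) + sqrt(2 * step) * xi,
    with xi standard Gaussian."""
    noise = np.sqrt(2.0 * step) * np.random.randn(*x.shape)
    return x - step * grad_potential(x) + noise

# Toy example: standard Gaussian target, U(x) = ||x||^2 / 2.
x = np.full(4, 5.0)
for _ in range(2000):
    x = lmc_step(x, grad_potential=lambda v: v, step=0.05)
```

The step size trades discretization bias against mixing speed, which is exactly what the $\widetilde{\mathcal{O}}(d\epsilon^{-1})$ rate estimates quantify.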
arXiv Detail & Related papers (2020-07-22T18:18:28Z) - Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.