The query complexity of sampling from strongly log-concave distributions
in one dimension
- URL: http://arxiv.org/abs/2105.14163v1
- Date: Sat, 29 May 2021 00:51:17 GMT
- Title: The query complexity of sampling from strongly log-concave distributions
in one dimension
- Authors: Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe
Rigollet
- Abstract summary: We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the query complexity of sampling from the class of strongly log-concave and log-smooth distributions.
We introduce a novel algorithm based on rejection sampling that closes this doubly exponential gap.
- Score: 14.18847457501901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the
query complexity of sampling from the class of strongly log-concave and
log-smooth distributions with condition number $\kappa$ in one dimension.
Whereas existing guarantees for MCMC-based algorithms scale polynomially in
$\kappa$, we introduce a novel algorithm based on rejection sampling that
closes this doubly exponential gap.
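The core primitive behind the result, rejection sampling from a one-dimensional strongly log-concave target, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: it assumes query access to an $m$-strongly-convex potential $V$ (with target $\pi \propto e^{-V}$) and knowledge of its minimizer, and exploits the fact that strong convexity makes a Gaussian centered at the minimizer a valid envelope.

```python
import math
import random


def rejection_sample(V, x_star, m, n_draws=1, rng=random):
    """Draw from pi(x) ∝ exp(-V(x)), where V is m-strongly convex
    with minimizer x_star.  Strong convexity gives the envelope
        exp(-V(x)) <= exp(-V(x_star)) * exp(-(m/2) * (x - x_star)**2),
    so a N(x_star, 1/m) proposal dominates the (unnormalized) target."""
    out = []
    while len(out) < n_draws:
        # Proposal draw from N(x_star, 1/m).
        x = rng.gauss(x_star, 1.0 / math.sqrt(m))
        # Accept with probability
        #   exp(-(V(x) - V(x_star) - (m/2)(x - x_star)^2)) <= 1,
        # which is exactly target / envelope at x.
        log_acc = -(V(x) - V(x_star) - 0.5 * m * (x - x_star) ** 2)
        if math.log(rng.random()) < log_acc:
            out.append(x)
    return out
```

For the standard Gaussian target ($V(x) = x^2/2$, $m = 1$) the envelope is tight and every proposal is accepted; for less smooth targets the acceptance rate, and hence the query count, degrades with the condition number, which is the gap the paper's more refined algorithm addresses.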
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Query lower bounds for log-concave sampling [22.446060015399905]
We prove lower bounds for sampling from strongly log-concave and log-smooth distributions in dimension $d \ge 2$.
Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in geometric measure theory, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem.
arXiv Detail & Related papers (2023-04-05T17:10:53Z) - Optimal Horizon-Free Reward-Free Exploration for Linear Mixture MDPs [60.40452803295326]
We propose a new reward-free algorithm for learning linear Markov decision processes (MDPs)
At the core of our algorithm is uncertainty-weighted value-targeted regression with exploration-driven pseudo-reward.
We show that our algorithm only needs to explore $\tilde{O}(d^2 \varepsilon^{-2})$ episodes to find an $\varepsilon$-optimal policy.
arXiv Detail & Related papers (2023-03-17T17:53:28Z) - Improved dimension dependence of a proximal algorithm for sampling [16.147290924171692]
We propose a sampling algorithm that achieves superior complexity bounds in all the classical settings.
Our algorithm is based on the proximal sampler introduced in Lee et al. (2021).
For strongly log-concave distributions, our method has complexity bound $\tilde{\mathcal{O}}(\kappa d^{1/2})$ without warm start.
For distributions satisfying the LSI, our bound is $\tilde{\mathcal{O}}(\hat\kappa d^{1/2})$ where $\hat\kappa$ is the ratio between smoothness and the log-Sobolev constant.
arXiv Detail & Related papers (2023-02-20T16:44:48Z) - Stochastic Approximation Approaches to Group Distributionally Robust
Optimization [96.26317627118912]
We study group distributionally robust optimization (GDRO) via stochastic approximation.
We apply online learning techniques to reduce the number of samples required in each round from $m$ to $1$, keeping the same sample complexity.
We also give a novel formulation of weighted GDRO, which allows us to derive distribution-dependent convergence rates.
arXiv Detail & Related papers (2023-02-18T09:24:15Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - Rejection sampling from shape-constrained distributions in sublinear
time [14.18847457501901]
We study the query complexity of rejection sampling in a minimax framework for various classes of discrete distributions.
Our results provide new algorithms for sampling whose complexity scales sublinearly with the alphabet size.
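One way the sublinear scaling can arise is illustrated by the following hedged sketch (not necessarily the paper's construction) for a monotone non-increasing pmf $p$ on $\{1, \dots, n\}$: monotonicity forces $p(i) \le 1/i$, so a Zipf-like envelope dominates $p$, and each trial makes only $O(1)$ queries to $p$ rather than reading all $n$ probabilities.

```python
import math
import random


def sample_monotone(p, n, rng=random):
    """Exact rejection sampler for a non-increasing pmf p on {1, ..., n}.
    Each trial queries p once; O(log n) trials are needed in expectation.
    Monotonicity implies p(i) <= 1/i, which the envelope exploits."""
    while True:
        # Propose i = floor(x), where x has density ∝ 1/x on [1, n+1];
        # by inverse transform, x = (n+1)**U for U ~ Uniform(0, 1).
        i = int((n + 1) ** rng.random())
        # Accept with probability p(i) / (2 ln(1 + 1/i)) <= 1,
        # valid because p(i) <= 1/i <= 2 ln(1 + 1/i) for i >= 1.
        if rng.random() < p(i) / (2.0 * math.log1p(1.0 / i)):
            return i
```

The proposal puts mass $\ln(1 + 1/i)/\ln(n+1)$ on $i$, so multiplying by the acceptance probability leaves mass proportional to $p(i)$: the output distribution is exactly $p$, at an expected cost of roughly $2\ln(n+1)$ trials per sample, sublinear in the alphabet size.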
arXiv Detail & Related papers (2021-05-29T01:00:42Z) - Complexity of zigzag sampling algorithm for strongly log-concave
distributions [6.336005544376984]
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O(\cdots)$.
arXiv Detail & Related papers (2020-12-21T03:10:21Z) - Revisiting the Sample Complexity of Sparse Spectrum Approximation of
Gaussian Processes [60.479499225746295]
We introduce a new scalable approximation for Gaussian processes with provable guarantees which hold simultaneously over its entire parameter space.
Our approximation is obtained from an improved sample complexity analysis for sparse spectrum Gaussian processes (SSGPs)
arXiv Detail & Related papers (2020-11-17T05:41:50Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator)
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
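A "plain" model-based approach of this kind can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's exact procedure: `sampler(s, a)` is a hypothetical generative-model interface returning `(next_state, reward)`, and the sketch builds the empirical MDP from a fixed number of calls per state-action pair, then plans by value iteration.

```python
def model_based_plan(sampler, S, A, gamma, n_per_pair, iters):
    """Estimate an empirical MDP from generative-model calls, then run
    value iteration on it and return (value estimates, greedy policy).
    States are 0..S-1 and actions are 0..A-1."""
    # Empirical model: transition frequencies and mean rewards.
    P = {(s, a): [0.0] * S for s in range(S) for a in range(A)}
    R = {(s, a): 0.0 for s in range(S) for a in range(A)}
    for s in range(S):
        for a in range(A):
            for _ in range(n_per_pair):
                s2, r = sampler(s, a)
                P[(s, a)][s2] += 1.0 / n_per_pair
                R[(s, a)] += r / n_per_pair
    # Value iteration on the empirical MDP.
    V = [0.0] * S
    for _ in range(iters):
        V = [max(R[(s, a)] + gamma * sum(p * v for p, v in zip(P[(s, a)], V))
                 for a in range(A)) for s in range(S)]
    # Greedy policy with respect to the final value estimate.
    pi = [max(range(A), key=lambda a: R[(s, a)] + gamma *
              sum(p * v for p, v in zip(P[(s, a)], V))) for s in range(S)]
    return V, pi
```

The paper's contribution is not the planner itself but showing that this kind of plug-in procedure already attains minimax-optimal sample complexity, with no algorithmic modification needed as the target accuracy varies.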
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.