Query lower bounds for log-concave sampling
- URL: http://arxiv.org/abs/2304.02599v2
- Date: Mon, 30 Oct 2023 03:01:43 GMT
- Title: Query lower bounds for log-concave sampling
- Authors: Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, Shyam Narayanan
- Abstract summary: We prove lower bounds for sampling from strongly log-concave and log-smooth distributions in dimension $d \ge 2$.
Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in geometric measure theory, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem.
- Score: 22.446060015399905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Log-concave sampling has witnessed remarkable algorithmic advances in recent
years, but the corresponding problem of proving lower bounds for this task has
remained elusive, with lower bounds previously known only in dimension one. In
this work, we establish the following query lower bounds: (1) sampling from
strongly log-concave and log-smooth distributions in dimension $d\ge 2$
requires $\Omega(\log \kappa)$ queries, which is sharp in any constant
dimension, and (2) sampling from Gaussians in dimension $d$ (hence also from
general log-concave and log-smooth distributions in dimension $d$) requires
$\widetilde \Omega(\min(\sqrt\kappa \log d, d))$ queries, which is nearly sharp
for the class of Gaussians. Here $\kappa$ denotes the condition number of the
target distribution. Our proofs rely upon (1) a multiscale construction
inspired by work on the Kakeya conjecture in geometric measure theory, and (2)
a novel reduction that demonstrates that block Krylov algorithms are optimal
for this problem, as well as connections to lower bound techniques based on
Wishart matrices developed in the matrix-vector query literature.
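The abstract's second lower bound ties Gaussian sampling to the matrix-vector query model, where block Krylov methods are the optimal algorithms. As a hedged illustration of that query model (not the paper's construction), the sketch below estimates the top eigenpair of a symmetric PSD matrix using only matrix-vector products; the function name and parameters are illustrative.

```python
import numpy as np

def block_krylov_top_eig(matvec, d, block=2, iters=10, seed=None):
    """Estimate the top eigenpair of a symmetric PSD matrix accessed
    only through matrix-vector products (queries), via a block Krylov
    subspace: span{X, AX, A^2 X, ...} followed by a Rayleigh-Ritz step."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((d, block))      # random starting block
    blocks = []
    for _ in range(iters):
        blocks.append(X)
        X = np.column_stack([matvec(x) for x in X.T])  # one query per column
    Q, _ = np.linalg.qr(np.column_stack(blocks))       # orthonormal Krylov basis
    T = Q.T @ np.column_stack([matvec(q) for q in Q.T])  # project A onto the basis
    evals, evecs = np.linalg.eigh(T)
    return evals[-1], Q @ evecs[:, -1]
```

Each column of each iterate costs one query, so `block * iters` queries total; the lower bound in the abstract says this tradeoff cannot be beaten for Gaussian sampling, up to logarithmic factors.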
Related papers
- Sum-of-squares lower bounds for Non-Gaussian Component Analysis [33.80749804695003]
Non-Gaussian Component Analysis (NGCA) is the statistical task of finding a non-Gaussian direction in a high-dimensional dataset.
Here we study the complexity of NGCA in the Sum-of-Squares framework.
arXiv Detail & Related papers (2024-10-28T18:19:13Z) - Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems [61.85150061213987]
We study the generalized low-rank matrix bandit problem, proposed by Lu et al. (2021) under the Generalized Linear Model (GLM) framework.
To overcome the computational infeasibility and theoretical restraints of existing algorithms, we first propose the G-ESTT framework.
We show that G-ESTT can achieve the $\widetilde{O}(\sqrt{(d_1+d_2)^{3/2} M r^{3/2} T})$ bound of regret while G-ESTS can achieve the $\widetilde{O}(\ldots)$
arXiv Detail & Related papers (2024-01-14T14:14:19Z) - Causal Bandits for Linear Structural Equation Models [58.2875460517691]
This paper studies the problem of designing an optimal sequence of interventions in a causal graphical model.
It is assumed that the graph's structure is known and has $N$ nodes.
Two algorithms are proposed for the frequentist (UCB-based) and Bayesian settings.
arXiv Detail & Related papers (2022-08-26T16:21:31Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Near-Optimal Statistical Query Lower Bounds for Agnostically Learning
Intersections of Halfspaces with Gaussian Marginals [10.06689891744466]
We consider the well-studied problem of learning intersections of halfspaces under the Gaussian distribution in the challenging agnostic learning model.
We prove two variants of our lower bound, each of which combines ingredients from Diakonikolas et al. (2021) with (an extension of) a different earlier approach for SQ lower bounds for the Boolean setting.
arXiv Detail & Related papers (2022-02-10T15:34:10Z) - The query complexity of sampling from strongly log-concave distributions
in one dimension [14.18847457501901]
We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the query complexity of sampling from the class of strongly log-concave and log-smooth distributions.
We introduce a novel algorithm based on rejection that closes this exponential gap.
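The summary above refers to a rejection-based algorithm; the paper's own construction is more involved, but a textbook rejection sampler for a one-dimensional strongly log-concave density conveys the query pattern. The target density below is an illustrative choice, not from the paper.

```python
import math
import random

def rejection_sample(n, rng=random):
    """Textbook rejection sampling for the 1D log-concave density
    p(x) ~ exp(-x**2/2 - |x|), using a standard Gaussian proposal
    q(x) ~ exp(-x**2/2). The unnormalized ratio p(x)/q(x) = exp(-|x|)
    is at most 1, so a proposal x is accepted with probability exp(-|x|)."""
    out = []
    while len(out) < n:
        x = rng.gauss(0.0, 1.0)          # draw from the Gaussian envelope
        if rng.random() < math.exp(-abs(x)):  # accept/reject step
            out.append(x)
    return out
```

Each accepted sample is exact; the cost is the expected number of proposals per acceptance, which for well-conditioned targets stays bounded.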
arXiv Detail & Related papers (2021-05-29T00:51:17Z) - Complexity of zigzag sampling algorithm for strongly log-concave
distributions [6.336005544376984]
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O(\ldots)$.
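For readers unfamiliar with the zigzag sampler: it is a piecewise-deterministic process whose velocity flips at random event times governed by the potential's gradient. The sketch below is a minimal one-dimensional version targeting a standard Gaussian (potential $U(x) = x^2/2$), where event times can be simulated exactly by inverting the integrated rate; it is a generic illustration, not the analysis in the paper.

```python
import math
import random

def zigzag_gaussian(n_events, rng):
    """Minimal 1D zigzag sampler targeting N(0, 1). With U(x) = x^2/2 the
    switching rate is lambda(x, v) = max(0, v*x), and along a segment the
    rate is max(0, a + t) with a = v*x, so event times are drawn exactly by
    inverting the integrated rate against an Exp(1) threshold. Returns the
    time-averaged first and second moments of the trajectory."""
    x, v = 0.0, 1.0
    t_total = s1 = s2 = 0.0
    for _ in range(n_events):
        e = -math.log(rng.random())      # Exp(1) threshold
        a = v * x
        if a >= 0:
            tau = -a + math.sqrt(a * a + 2 * e)   # solve a*t + t^2/2 = e
        else:
            tau = -a + math.sqrt(2 * e)           # rate is 0 until t = -a
        # accumulate exact integrals of x(t) and x(t)^2 over the segment
        s1 += x * tau + v * tau**2 / 2
        s2 += x * x * tau + v * x * tau**2 + tau**3 / 3
        t_total += tau
        x += v * tau                     # move to the event location
        v = -v                           # bounce: flip the velocity
    return s1 / t_total, s2 / t_total
```

The time averages converge to the moments of N(0, 1), which is a quick sanity check that the rates and inversion formulas are right.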
arXiv Detail & Related papers (2020-12-21T03:10:21Z) - Thresholded Lasso Bandit [70.17389393497125]
Thresholded Lasso bandit is an algorithm that estimates the vector defining the reward function as well as its sparse support.
We establish non-asymptotic regret upper bounds scaling as $\mathcal{O}(\log d + \sqrt{T})$ in general, and as $\mathcal{O}(\log d + \sqrt{T})$ under the so-called margin condition.
arXiv Detail & Related papers (2020-10-22T19:14:37Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z) - Oracle Lower Bounds for Stochastic Gradient Sampling Algorithms [39.746670539407084]
We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$.
We prove an information theoretic lower bound on the number of gradient queries of the log density needed.
arXiv Detail & Related papers (2020-02-01T23:46:35Z)
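The gradient-query model in the entry above is the access model used by Langevin-type samplers: each iteration spends exactly one query on the gradient of the log-density. As a hedged sketch of such an algorithm (the unadjusted Langevin algorithm, a standard example and not the paper's construction):

```python
import numpy as np

def ula(grad_logp, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm. Each iteration makes one query to the
    gradient of the log-density and adds Gaussian noise; after many steps the
    iterate is approximately distributed according to the target. Running it
    on an array of starting points simulates independent chains in parallel
    for a product target."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_logp(x) + np.sqrt(2.0 * step) * noise  # one query
    return x
```

Lower bounds of the kind proved in that paper say how many such gradient queries any algorithm in this model must spend, regardless of how cleverly it uses them.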
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.