Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample
Complexity for Learning Single Index Models
- URL: http://arxiv.org/abs/2305.10633v1
- Date: Thu, 18 May 2023 01:10:11 GMT
- Title: Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample
Complexity for Learning Single Index Models
- Authors: Alex Damian, Eshaan Nichani, Rong Ge, Jason D. Lee
- Abstract summary: We focus on the task of learning a single index model $\sigma(w^\star \cdot x)$ with respect to the isotropic Gaussian distribution in $d$ dimensions.
We show that online SGD on a smoothed loss learns $w^\star$ with $n \gtrsim d^{k^\star/2}$ samples.
- Score: 43.83997656986799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on the task of learning a single index model $\sigma(w^\star \cdot
x)$ with respect to the isotropic Gaussian distribution in $d$ dimensions.
Prior work has shown that the sample complexity of learning $w^\star$ is
governed by the information exponent $k^\star$ of the link function $\sigma$,
which is defined as the index of the first nonzero Hermite coefficient of
$\sigma$. Ben Arous et al. (2021) showed that $n \gtrsim d^{k^\star-1}$ samples
suffice for learning $w^\star$ and that this is tight for online SGD. However,
the CSQ lower bound for gradient based methods only shows that $n \gtrsim
d^{k^\star/2}$ samples are necessary. In this work, we close the gap between
the upper and lower bounds by showing that online SGD on a smoothed loss learns
$w^\star$ with $n \gtrsim d^{k^\star/2}$ samples. We also draw connections to
statistical analyses of tensor PCA and to the implicit regularization effects
of minibatch SGD on empirical losses.
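For reference, the information exponent mentioned in the abstract is the index of the first nonzero Hermite coefficient of the link function. Written out as an expansion:

$$\sigma(t) = \sum_{k \ge 0} c_k \, \mathrm{He}_k(t), \qquad k^\star = \min\{k \ge 1 : c_k \neq 0\},$$

so, for example, $\sigma(t) = t^2 - 1 = \mathrm{He}_2(t)$ has $k^\star = 2$ (the constant term carries no information about $w^\star$).

As a concrete illustration of the kind of algorithm the abstract describes, the sketch below runs online SGD on a Monte Carlo approximation of a smoothed loss: one fresh Gaussian sample per step, per-sample gradients averaged over random perturbations of the weight vector at radius `lam`, and iterates projected back onto the unit sphere. This is a minimal sketch under assumed choices, not the paper's exact procedure: the perturbation scheme, step size `lr`, smoothing radius `lam`, perturbation count `n_perturb`, and the finite-difference derivative of the link are all placeholder choices.

```python
import numpy as np

def smoothed_online_sgd(sigma, d, n_steps, lr=1e-2, lam=1.0, n_perturb=8, seed=0):
    """Sketch of online SGD on a smoothed loss for a single index model.

    Each step draws one fresh Gaussian sample (x, y), averages the
    per-sample gradient over random perturbations of w at radius `lam`
    (a Monte Carlo proxy for the smoothed-loss gradient), takes a
    gradient step, and projects back onto the unit sphere.
    """
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)      # hidden direction (simulated teacher)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)                # random initialization on the sphere

    eps = 1e-5                            # finite-difference width for sigma'
    for _ in range(n_steps):
        x = rng.standard_normal(d)        # one fresh sample per step: online SGD
        y = sigma(w_star @ x)
        grad = np.zeros(d)
        for _ in range(n_perturb):
            z = rng.standard_normal(d)
            z -= (z @ w) * w              # perturb orthogonally to w
            z *= lam / np.linalg.norm(z)
            u = w + z
            u /= np.linalg.norm(u)
            pred = sigma(u @ x)
            dpred = (sigma(u @ x + eps) - pred) / eps
            # gradient of 0.5*(pred - y)^2 at the perturbed point,
            # used here as a proxy for the smoothed gradient at w
            grad += (pred - y) * dpred * x
        grad /= n_perturb
        w -= lr * grad
        w /= np.linalg.norm(w)            # keep the iterate on the sphere
    return w, w_star

# Toy run with a k* = 2 link (second Hermite polynomial He_2(t) = t^2 - 1):
w, w_star = smoothed_online_sgd(sigma=lambda t: t * t - 1.0, d=32, n_steps=20000)
print(f"|<w, w*>| = {abs(w @ w_star):.3f}")  # alignment with the hidden direction
```

The design point mirrored from the abstract is that smoothing the landscape at a large radius boosts the weak signal along $w^\star$ early in training, which is what lets online SGD succeed with $n \gtrsim d^{k^\star/2}$ samples rather than $d^{k^\star-1}$.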
Related papers
- Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models [37.42736399673992]
A single-index model (SIM) is a function of the form $\sigma(\mathbf{w}^\ast \cdot \mathbf{x})$, where $\sigma: \mathbb{R} \to \mathbb{R}$ is a known link function and $\mathbf{w}^\ast$ is a hidden unit vector.
We show that a proper learner attains $L^2$-error of $O(\mathrm{OPT})+\epsilon$, where ...
arXiv Detail & Related papers (2024-11-08T17:10:38Z)
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators for distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z)
- Sum-of-squares lower bounds for Non-Gaussian Component Analysis [33.80749804695003]
Non-Gaussian Component Analysis (NGCA) is the statistical task of finding a non-Gaussian direction in a high-dimensional dataset.
Here we study the complexity of NGCA in the Sum-of-Squares framework.
arXiv Detail & Related papers (2024-10-28T18:19:13Z)
- Learning Orthogonal Multi-Index Models: A Fine-Grained Information Exponent Analysis [45.05072391903122]
The information exponent plays an important role in predicting the sample complexity of online gradient descent.
For multi-index models, focusing solely on the lowest degree can miss key structural details.
We show that by considering both second- and higher-order terms, we can first learn the relevant space via the second-order terms.
arXiv Detail & Related papers (2024-10-13T00:14:08Z)
- Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^r(n^\gamma, n^{-\alpha})$ subhypergraph in a $G^r(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z)
- Structure Learning in Graphical Models from Indirect Observations [17.521712510832558]
This paper considers learning of the graphical structure of a $p$-dimensional random vector $X \in \mathbb{R}^p$ using both parametric and non-parametric methods.
Under mild conditions, we show that our graph-structure estimator can obtain the correct structure.
arXiv Detail & Related papers (2022-05-06T19:24:44Z)
- On the Power of Multitask Representation Learning in Linear MDP [61.58929164172968]
This paper presents analyses of the statistical benefit of multitask representation learning in linear Markov Decision Processes (MDPs).
We first discover a Least-Activated-Feature-Abundance (LAFA) criterion, denoted as $\kappa$, with which we prove that a straightforward least-square algorithm learns a policy which is $\tilde{O}(H^2\sqrt{\frac{\kappa\,\mathcal{C}(\Phi)^2\,\kappa d}{NT}+\frac{\kappa d}{n}})$ suboptimal.
arXiv Detail & Related papers (2021-06-15T11:21:06Z)
- Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model [3.749193647980305]
This paper considers a Markov decision process (MDP) that admits a set of state-action features.
We show that a model-based approach (resp. $Q$-learning) provably learns an $\varepsilon$-optimal policy with high probability.
arXiv Detail & Related papers (2021-05-28T17:49:39Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an improved algorithm with lower sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2}+\frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.