Tight Time-Space Lower Bounds for Constant-Pass Learning
- URL: http://arxiv.org/abs/2310.08070v1
- Date: Thu, 12 Oct 2023 06:36:31 GMT
- Title: Tight Time-Space Lower Bounds for Constant-Pass Learning
- Authors: Xin Lyu, Avishay Tal, Hongxun Wu, Junzhao Yang
- Abstract summary: We prove tight memory-sample lower bounds for any parity learning algorithm that makes $q$ passes over the stream of samples.
We show that such a learner requires either $\Omega(n^2)$ memory size or at least $2^{\Omega(n)}$ samples.
Similar to prior work, our results extend to any learning problem with many nearly-orthogonal concepts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In his breakthrough paper, Raz showed that any parity learning algorithm
requires either quadratic memory or an exponential number of samples [FOCS'16,
JACM'19]. A line of work that followed extended this result to a large class of
learning problems. Until recently, all these results considered learning in the
streaming model, where each sample is drawn independently, and the learner is
allowed a single pass over the stream of samples. Garg, Raz, and Tal [CCC'19]
considered a stronger model, allowing multiple passes over the stream. In the
$2$-pass model, they showed that learning parities of size $n$ requires either
a memory of size $n^{1.5}$ or at least $2^{\sqrt{n}}$ samples. (Their result
also generalizes to other learning problems.)
In this work, for any constant $q$, we prove tight memory-sample lower bounds
for any parity learning algorithm that makes $q$ passes over the stream of
samples. We show that such a learner requires either $\Omega(n^{2})$ memory
size or at least $2^{\Omega(n)}$ samples. Beyond establishing a tight lower
bound, this is the first non-trivial lower bound for $q$-pass learning for any
$q\ge 3$. Similar to prior work, our results extend to any learning problem
with many nearly-orthogonal concepts.
We complement the lower bound with an upper bound, showing that parity
learning with $q$ passes can be done efficiently with $O(n^2/\log q)$ memory.
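To make the streaming model concrete, below is a minimal sketch (not taken from the paper) of the classical one-pass learner against which these bounds are tight: online Gaussian elimination over $\mathbb{F}_2$, which keeps at most $n$ reduced equations of $n+1$ bits each, i.e. $O(n^2)$ bits of memory, and recovers the hidden parity from $O(n)$ uniformly random samples. All names are illustrative.

```python
import random

def parity_stream(x, rng):
    """Yield samples (a, <a, x> mod 2) for a hidden parity x in {0,1}^n."""
    n = len(x)
    while True:
        a = [rng.randrange(2) for _ in range(n)]
        yield a, sum(ai & xi for ai, xi in zip(a, x)) % 2

def learn_parity_one_pass(stream, n, max_samples=100_000):
    """One-pass online Gaussian elimination over F_2.

    Stores at most n reduced equations of n+1 bits each (O(n^2) bits of
    state); with uniform samples, n + O(1) of them suffice in expectation.
    """
    basis = {}  # pivot position -> (reduced row, label)
    for _, (a, b) in zip(range(max_samples), stream):
        row = list(a)
        for p in range(n):
            if row[p] == 0:
                continue
            if p in basis:  # eliminate the leading 1, keep reducing
                prow, pb = basis[p]
                row = [r ^ s for r, s in zip(row, prow)]
                b ^= pb
            else:           # new pivot: store the reduced equation
                basis[p] = (row, b)
                break
        if len(basis) == n:  # full rank: back-substitute to recover x
            x = [0] * n
            for p in reversed(range(n)):
                prow, pb = basis[p]
                x[p] = (pb + sum(prow[j] * x[j] for j in range(p + 1, n))) % 2
            return x
    return None  # stream ended before n independent equations arrived

rng = random.Random(0)
secret = [rng.randrange(2) for _ in range(32)]
assert learn_parity_one_pass(parity_stream(secret, rng), 32) == secret
```

By contrast, the paper's lower bound says that any learner with substantially less than $n^2$ bits of memory must consume $2^{\Omega(n)}$ samples, and its $O(n^2/\log q)$ upper bound shows that $q$ passes buy only a logarithmic saving over this baseline.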
Related papers
- Optimal algorithms for learning quantum phase states [8.736370689844682]
We show that the sample complexity of learning an unknown degree-$d$ phase state is $\Theta(n^d)$ if we allow separable measurements.
We also consider learning phase states when $f$ has sparsity $s$ and degree $d$ in its $\mathbb{F}_2$ representation.
arXiv Detail & Related papers (2022-08-16T17:15:06Z) - Cryptographic Hardness of Learning Halfspaces with Massart Noise [59.8587499110224]
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z) - Strong Memory Lower Bounds for Learning Natural Models [16.900376638975978]
We give lower bounds on the amount of memory required by one-pass streaming algorithms.
We show that algorithms which learn using a near-minimal number of examples, $\tilde{O}(\kappa)$, must use $\tilde{\Omega}(d\kappa)$ bits of space.
arXiv Detail & Related papers (2022-06-09T19:35:47Z) - Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures [9.053430799456587]
We study the problem of learning nonparametric distributions in a finite mixture.
We establish tight bounds on the sample complexity for learning the component distributions in such models.
arXiv Detail & Related papers (2022-03-28T23:53:48Z) - Memory-Sample Lower Bounds for Learning Parity with Noise [2.724141845301679]
We show, for the well-studied problem of learning parity under noise, that any learning algorithm requires either a memory of size $\Omega(n^2/\varepsilon)$ or an exponential number of samples.
Our proof is based on adapting the arguments in [Raz'17,GRT'18] to the noisy case.
arXiv Detail & Related papers (2021-07-05T23:34:39Z) - On the Power of Multitask Representation Learning in Linear MDP [61.58929164172968]
This paper presents analyses of the statistical benefit of multitask representation learning in the linear Markov Decision Process (MDP) setting.
We first discover a Least-Activated-Feature-Abundance (LAFA) criterion, denoted as $\kappa$, with which we prove that a straightforward least-squares algorithm learns a policy which is $\tilde{O}\big(H^2\sqrt{\frac{\kappa\,\mathcal{C}(\Phi)^2\,\kappa d}{NT}+\frac{\kappa d}{n}}\big)$ suboptimal.
arXiv Detail & Related papers (2021-06-15T11:21:06Z) - Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^d$, given access to $A \in \mathbb{R}^{d \times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Few-Shot Learning via Learning the Representation, Provably [115.7367053639605]
This paper studies few-shot learning via representation learning.
One uses $T$ source tasks with $n_1$ samples per task to learn a representation in order to reduce the sample complexity of a target task.
arXiv Detail & Related papers (2020-02-21T17:30:00Z) - Quantum Coupon Collector [62.58209964224025]
We study how efficiently a $k$-element set $S \subseteq [n]$ can be learned from a uniform superposition $|S\rangle$ of its elements.
We give tight bounds on the number of quantum samples needed for every $k$ and $n$, and we give efficient quantum learning algorithms.
arXiv Detail & Related papers (2020-02-18T16:14:55Z)