Hardness of Learning Halfspaces with Massart Noise
- URL: http://arxiv.org/abs/2012.09720v1
- Date: Thu, 17 Dec 2020 16:43:11 GMT
- Title: Hardness of Learning Halfspaces with Massart Noise
- Authors: Ilias Diakonikolas and Daniel M. Kane
- Abstract summary: We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by a polynomial-time SQ algorithm.
- Score: 56.98280399449707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the complexity of PAC learning halfspaces in the presence of Massart
(bounded) noise. Specifically, given labeled examples $(x, y)$ from a
distribution $D$ on $\mathbb{R}^{n} \times \{ \pm 1\}$ such that the marginal
distribution on $x$ is arbitrary and the labels are generated by an unknown
halfspace corrupted with Massart noise at rate $\eta<1/2$, we want to compute a
hypothesis with small misclassification error. Characterizing the efficient
learnability of halfspaces in the Massart model has remained a longstanding
open problem in learning theory.
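For intuition, the following is a minimal Python sketch (not from the paper) of how Massart-noisy examples are generated. The Gaussian marginal, the target halfspace, and the specific flip function `eta_x` are all illustrative assumptions: the model allows an arbitrary marginal on $x$ and any point-dependent flip rate bounded by $\eta$.

```python
import numpy as np

def massart_sample(n_samples, dim, eta, seed=None):
    """Draw (x, y) pairs from a hypothetical Massart-noisy halfspace."""
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(dim)
    w_star /= np.linalg.norm(w_star)           # unknown unit-norm target halfspace
    x = rng.standard_normal((n_samples, dim))  # marginal on x is arbitrary; Gaussian here
    clean = np.sign(x @ w_star)                # noiseless labels in {-1, +1}
    # Point-dependent flip rate eta(x) <= eta < 1/2; as one adversarial
    # choice, concentrate the noise budget near the decision boundary.
    eta_x = eta * np.exp(-np.abs(x @ w_star))
    flips = rng.random(n_samples) < eta_x
    y = np.where(flips, -clean, clean)
    return x, y, w_star
```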
Recent work gave a polynomial-time learning algorithm for this problem with
error $\eta+\epsilon$. This error upper bound can be far from the
information-theoretically optimal bound of $\mathrm{OPT}+\epsilon$. More recent
work showed that {\em exact learning}, i.e., achieving error
$\mathrm{OPT}+\epsilon$, is hard in the Statistical Query (SQ) model. In this
work, we show that there is an exponential gap between the
information-theoretically optimal error and the best error that can be achieved
by a polynomial-time SQ algorithm. In particular, our lower bound implies that
no efficient SQ algorithm can approximate the optimal error within any
polynomial factor.
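The lower bound applies to the Statistical Query model, in which a learner never inspects individual examples but instead asks for expectations of bounded functions of $(x, y)$, each answered up to an additive tolerance $\tau$. The sketch below (again illustrative, with the adversary's freedom modeled as a random perturbation of an empirical mean) shows the restricted interface such an algorithm works through.

```python
import numpy as np

def sq_oracle(phi, x, y, tau, seed=None):
    """Answer a statistical query phi within additive tolerance tau.

    In the SQ model the oracle may return ANY value within tau of the
    true expectation E[phi(x, y)]; here that adversarial freedom is
    modeled by perturbing an empirical mean (illustrative only).
    """
    rng = np.random.default_rng(seed)
    values = phi(x, y)                      # phi must map into [-1, 1]
    assert np.all(np.abs(values) <= 1.0), "SQ queries must be bounded"
    return values.mean() + rng.uniform(-tau, tau)

# Example query (using x, y from the sampler above): correlation of the
# label with a clipped coordinate of x.
# answer = sq_oracle(lambda x, y: np.clip(x[:, 0], -1.0, 1.0) * y, x, y, tau=0.01)
```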
Related papers
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - SQ Lower Bounds for Learning Single Neurons with Massart Noise [40.1662767099183]
We study PAC learning of a single neuron in the presence of Massart noise.
We prove that no efficient SQ algorithm can approximate the optimal error within any constant factor.
arXiv Detail & Related papers (2022-10-18T15:58:00Z) - Cryptographic Hardness of Learning Halfspaces with Massart Noise [59.8587499110224]
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study functions of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations $\sigma$.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Classification Under Misspecification: Halfspaces, Generalized Linear
Models, and Connections to Evolvability [39.01599245403753]
In particular, we study the problem of learning halfspaces under Massart noise with rate $\eta$.
We show any SQ algorithm requires super-polynomially many queries to achieve $\mathsf{OPT} + \epsilon$.
We also study our algorithm for learning halfspaces under Massart noise empirically and find that it exhibits some appealing fairness properties.
arXiv Detail & Related papers (2020-06-08T17:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.