Classification Under Misspecification: Halfspaces, Generalized Linear
Models, and Connections to Evolvability
- URL: http://arxiv.org/abs/2006.04787v2
- Date: Wed, 20 Sep 2023 14:40:02 GMT
- Title: Classification Under Misspecification: Halfspaces, Generalized Linear
Models, and Connections to Evolvability
- Authors: Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau
- Abstract summary: In particular, we study the problem of learning halfspaces under Massart noise with rate $\eta$.
We show any SQ algorithm requires super-polynomially many queries to achieve $\mathsf{OPT} + \epsilon$.
We also study our algorithm for learning halfspaces under Massart noise empirically and find that it exhibits some appealing fairness properties.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we revisit some classic problems on classification under
misspecification. In particular, we study the problem of learning halfspaces
under Massart noise with rate $\eta$. In a recent work, Diakonikolas,
Gouleakis, and Tzamos resolved a long-standing problem by giving the first
efficient algorithm for learning to accuracy $\eta + \epsilon$ for any
$\epsilon > 0$. However, their algorithm outputs a complicated hypothesis,
which partitions space into $\text{poly}(d,1/\epsilon)$ regions. Here we give a
much simpler algorithm and in the process resolve a number of outstanding open
questions:
(1) We give the first proper learner for Massart halfspaces that achieves
$\eta + \epsilon$. We also give improved bounds on the sample complexity
achievable by polynomial time algorithms.
(2) Based on (1), we develop a blackbox knowledge distillation procedure to
convert an arbitrarily complex classifier to an equally good proper classifier.
(3) By leveraging a simple but overlooked connection to evolvability, we show
any SQ algorithm requires super-polynomially many queries to achieve
$\mathsf{OPT} + \epsilon$.
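To make the noise model concrete, here is a minimal sketch (our own illustration, not code from the paper) of how labels of a halfspace are corrupted under Massart noise with rate $\eta$; the names `w_star` and `w_hat` are placeholders for the true halfspace and the halfspace a proper learner would output.

```python
# Minimal sketch (not from the paper): halfspace labels corrupted by
# Massart noise with rate eta.  Each label is flipped with probability
# eta_x <= eta that may depend on the point x; a random choice of eta_x
# stands in for the adversary here.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 10, 5000, 0.1

w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)            # true halfspace

X = rng.normal(size=(n, d))
clean = np.sign(X @ w_star)
flip_prob = rng.uniform(0.0, eta, size=n)   # adversarial eta_x <= eta (randomized here)
y = np.where(rng.random(n) < flip_prob, -clean, clean)

# A proper learner must output a single halfspace w_hat with 0-1 error at
# most eta + epsilon; w_hat below is only a placeholder for that output.
w_hat = w_star
err = np.mean(np.sign(X @ w_hat) != y)
print(f"0-1 error of w_hat: {err:.3f}  (target guarantee: <= eta + eps, eta = {eta})")
```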
Moreover we study generalized linear models where $\mathbb{E}[Y|\mathbf{X}] =
\sigma(\langle \mathbf{w}^*, \mathbf{X}\rangle)$ for any odd, monotone, and
Lipschitz function $\sigma$. This family includes the previously mentioned
halfspace models as a special case, but is much richer and includes other
fundamental models like logistic regression. We introduce a challenging new
corruption model that generalizes Massart noise, and give a general algorithm
for learning in this setting. Our algorithms are based on a small set of core
recipes for learning to classify in the presence of misspecification.
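For concreteness, the following is a small sketch (again our own illustration, assuming labels $Y \in \{-1,+1\}$ and choosing $\tanh$ as one odd, monotone, 1-Lipschitz link) of sampling from the generalized linear model $\mathbb{E}[Y|\mathbf{X}] = \sigma(\langle \mathbf{w}^*, \mathbf{X}\rangle)$.

```python
# Minimal sketch (not from the paper): sampling from the generalized linear
# model E[Y | X] = sigma(<w*, X>) with labels Y in {-1, +1} and an odd,
# monotone, Lipschitz link sigma.  tanh is one such link; logistic
# regression corresponds to the link z -> tanh(z / 2).
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 5000

sigma = np.tanh                        # odd, monotone, 1-Lipschitz
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))

mean_label = sigma(X @ w_star)         # E[Y | X], lies in (-1, 1)
p_plus = (1.0 + mean_label) / 2.0      # P(Y = +1 | X) matching that mean
Y = np.where(rng.random(n) < p_plus, 1.0, -1.0)
```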
Finally we study our algorithm for learning halfspaces under Massart noise
empirically and find that it exhibits some appealing fairness properties.
Related papers
- Efficient Algorithms for Learning Monophonic Halfspaces in Graphs [7.619404259039284]
We prove several novel results for learning monophonic halfspaces in the supervised, online, and active settings.
Our main result is that a monophonic halfspace can be learned with near-optimal complexity in time polynomial in $n = |V(G)|$.
We also show that the concept class can be enumerated with delay $\operatorname{poly}(n)$, and that empirical risk minimization can be performed in time $2^{\omega(G)}\operatorname{poly}(n)$.
arXiv Detail & Related papers (2024-05-01T20:34:12Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Cryptographic Hardness of Learning Halfspaces with Massart Noise [59.8587499110224]
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learners can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by an SQ algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z)