Attribute-Efficient Learning of Halfspaces with Malicious Noise:
Near-Optimal Label Complexity and Noise Tolerance
- URL: http://arxiv.org/abs/2006.03781v5
- Date: Tue, 2 Mar 2021 07:19:55 GMT
- Title: Attribute-Efficient Learning of Halfspaces with Malicious Noise:
Near-Optimal Label Complexity and Noise Tolerance
- Authors: Jie Shen and Chicheng Zhang
- Abstract summary: This paper is concerned with computationally efficient learning of homogeneous sparse halfspaces in $\mathbb{R}^d$ under noise.
We show that the sample complexity is $\tilde{O}\big(\frac{1}{\epsilon} s^2 \log^5 d\big)$, which also enjoys attribute efficiency.
- Score: 21.76197397540397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is concerned with computationally efficient learning of
homogeneous sparse halfspaces in $\mathbb{R}^d$ under noise. Though recent
works have established attribute-efficient learning algorithms under various
types of label noise (e.g. bounded noise), it remains an open question when and
how $s$-sparse halfspaces can be efficiently learned under the challenging
malicious noise model, where an adversary may corrupt both the unlabeled
examples and the labels. We answer this question in the affirmative by
designing a computationally efficient active learning algorithm with
near-optimal label complexity of $\tilde{O}\big({s \log^4 \frac d \epsilon}
\big)$ and noise tolerance $\eta = \Omega(\epsilon)$, where $\epsilon \in (0,
1)$ is the target error rate, under the assumption that the distribution over
(uncorrupted) unlabeled examples is isotropic log-concave. Our algorithm can be
straightforwardly tailored to the passive learning setting, and we show that
the sample complexity is $\tilde{O}\big({\frac 1 \epsilon s^2 \log^5 d} \big)$,
which also enjoys attribute efficiency. Our main techniques include
attribute-efficient paradigms for instance reweighting and for empirical risk
minimization, and a new analysis of uniform concentration for unbounded data --
all of them crucially take the structure of the underlying halfspace into
account.
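The two ingredients the abstract highlights are a label-efficient query rule and sparsity-aware updates. The following minimal sketch is not the authors' algorithm; it is written under assumptions not stated in the paper (isotropic Gaussian unlabeled data, a Perceptron-style mistake-driven update, random label flips standing in for the malicious adversary, and ad hoc band widths, step sizes, and query budgets) and only illustrates how a shrinking margin band limits label queries while a hard-thresholding projection keeps the hypothesis $s$-sparse.
```python
# Minimal illustrative sketch (assumed setup, not the paper's algorithm):
# isotropic Gaussian unlabeled data, Perceptron-style updates, random label
# flips as a stand-in for malicious corruptions, ad hoc hyperparameters.
import numpy as np


def hard_threshold(w, s):
    """Keep the s largest-magnitude coordinates of w and renormalize to unit norm."""
    v = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-s:]
    v[top] = w[top]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v


def sparse_margin_active_learner(sample_unlabeled, query_label, d, s,
                                 rounds=6, band0=1.0, step=0.1,
                                 labels_per_round=150, seed=0):
    """Toy active learner: query labels only inside a shrinking margin band
    around the current hypothesis, and project onto s-sparse unit vectors
    after every mistake-driven update."""
    rng = np.random.default_rng(seed)
    w = hard_threshold(rng.standard_normal(d), s)
    for t in range(rounds):
        band = band0 * 2.0 ** (-t)            # geometrically shrinking band
        queried = 0
        while queried < labels_per_round:
            x = sample_unlabeled()
            if abs(x @ w) > band:             # outside the band: no label query
                continue
            y = query_label(x)                # label query (possibly corrupted)
            queried += 1
            if y * (x @ w) <= 0:              # mistake: Perceptron-style step
                w = hard_threshold(w + step * y * x, s)
    return w


if __name__ == "__main__":
    # Synthetic demo: hidden 5-sparse target in R^200, 5% of labels flipped.
    rng = np.random.default_rng(1)
    d, s, eta = 200, 5, 0.05
    w_star = np.zeros(d)
    w_star[:s] = 1.0 / np.sqrt(s)
    sample = lambda: rng.standard_normal(d)
    query = lambda x: (-1 if rng.random() < eta else 1) * (1 if x @ w_star >= 0 else -1)
    w_hat = sparse_margin_active_learner(sample, query, d, s)
    print("cosine similarity to target:", float(w_hat @ w_star))
```
The paper's actual procedure additionally involves instance reweighting, filtering of corrupted examples, and constrained empirical risk minimization; the hard-thresholding projection above is only meant to show where attribute efficiency enters, i.e. why the label and sample bounds scale with $s$ and only polylogarithmically with $d$.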
Related papers
- Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
arXiv Detail & Related papers (2024-06-07T12:07:16Z)
- Efficient Active Learning Halfspaces with Tsybakov Noise: A Non-convex Optimization Approach [28.518140532315776]
We study the problem of computationally efficient, label-efficient active learning of halfspaces.
We prove that any approximate first-order stationary point of a smooth nonconvex loss, computed over structured unlabeled data, yields a halfspace with low excess error.
arXiv Detail & Related papers (2023-10-23T23:55:28Z)
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega\big(d^{1/2}/(\max\{p, \epsilon\})^2\big)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z)
- Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z)
- Under-bagging Nearest Neighbors for Imbalanced Classification [63.026765294759876]
We propose an ensemble learning algorithm called under-bagging $k$-NN for imbalanced classification problems.
arXiv Detail & Related papers (2021-09-01T14:10:38Z)
- Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov noise [29.890039126644776]
We develop a PAC active learning algorithm for $d$-dimensional homogeneous halfspaces that can tolerate Massart noise and Tsybakov noise.
Under the more challenging Tsybakov noise condition, we identify two subfamilies of noise conditions under which our algorithm achieves computational efficiency, and we provide label complexity guarantees strictly lower than those of passive learning algorithms.
arXiv Detail & Related papers (2021-02-10T08:17:17Z)
- On the Power of Localized Perceptron for Label-Optimal Learning of Halfspaces with Adversarial Noise [4.8728183994912415]
We study online active learning of homogeneous halfspaces in $\mathbb{R}^d$ with adversarial noise.
Our main contribution is a Perceptron-like active learning algorithm that runs in polynomial time and learns the underlying halfspace with near-optimal label complexity.
arXiv Detail & Related papers (2020-12-19T22:04:10Z)
- Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by an SQ algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z)
- Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z)
- Efficient active learning of sparse halfspaces with arbitrary bounded noise [34.406103025985445]
We study active learning of homogeneous $s$-sparse halfspaces in $\mathbb{R}^d$ under the setting where the unlabeled data distribution is isotropic log-concave.
Under high levels of label noise, the label complexity bounds achieved by computationally efficient algorithms are much worse than information-theoretic limits.
This is the first efficient algorithm with label complexity polynomial in $\frac{1}{1-2\eta}$ in this setting, which is label-efficient even for $\eta$ arbitrarily close to $\frac{1}{2}$.
arXiv Detail & Related papers (2020-02-12T08:28:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.