Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate
- URL: http://arxiv.org/abs/2410.01186v2
- Date: Thu, 17 Oct 2024 12:50:06 GMT
- Title: Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate
- Authors: Jie Shen, Xiaoyu Li
- Abstract summary: We study the problem of computationally efficient PAC learning of halfspaces in the presence of malicious noise.
We show that it is possible to achieve *constant* noise tolerance by minimizing a reweighted hinge loss.
- Score: 14.461730902968862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding noise tolerance of learning algorithms under certain conditions is a central quest in learning theory. In this work, we study the problem of computationally efficient PAC learning of halfspaces in the presence of malicious noise, where an adversary can corrupt both instances and labels of training samples. The best-known noise tolerance either depends on a target error rate under distributional assumptions or on a margin parameter under large-margin conditions. In this work, we show that when both types of conditions are satisfied, it is possible to achieve *constant* noise tolerance by minimizing a reweighted hinge loss. Our key ingredients include: 1) an efficient algorithm that finds weights to control the gradient deterioration from corrupted samples, and 2) a new analysis on the robustness of the hinge loss equipped with such weights.
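To make the recipe concrete, here is a minimal NumPy sketch of the reweight-then-minimize idea. It is an illustration, not the paper's algorithm: the weights `q` below are a naive norm-based heuristic, whereas the paper computes weights by an efficient procedure that provably controls the gradient deterioration caused by corrupted samples.

```python
import numpy as np

def weighted_hinge_grad(w, X, y, q):
    """Subgradient of sum_i q_i * max(0, 1 - y_i * <w, x_i>), y_i in {-1, +1}."""
    margins = y * (X @ w)
    active = margins < 1.0                        # samples with nonzero hinge loss
    if not np.any(active):
        return np.zeros_like(w)
    return -(X[active] * (q[active] * y[active])[:, None]).sum(axis=0)

def learn_halfspace(X, y, n_iters=500, lr=0.01):
    """Minimize a reweighted hinge loss over unit-norm halfspaces.

    NOTE: the weights q here are a crude stand-in -- samples with
    outlying norms (a naive proxy for corruption) are softly
    downweighted so they cannot dominate the gradient.
    """
    n, d = X.shape
    norms = np.linalg.norm(X, axis=1)
    q = np.minimum(1.0, np.median(norms) / np.maximum(norms, 1e-12))
    q /= q.sum()

    w = np.random.default_rng(0).standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iters):
        w -= lr * weighted_hinge_grad(w, X, y, q)
        w /= max(np.linalg.norm(w), 1e-12)        # project back onto the unit sphere
    return w
```

Projecting back to the unit sphere after each step keeps the iterate in the class of unit-norm halfspaces, matching the large-margin setting the abstract describes.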
Related papers
- Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise risks overfitting them to the noisy labels.
We propose to mitigate this overfitting by imposing a lower bound on the training loss (a minimal sketch of this idea follows below).
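One concrete way to impose such a bound is the "flooding" trick of Ishida et al. (2020), which reflects the loss around a flood level `b`. This PyTorch sketch is an illustration of that general idea; the flood level `b` is a hypothetical tuning parameter, and the paper's exact bound may differ.

```python
import torch

def flooded_loss(loss: torch.Tensor, b: float = 0.3) -> torch.Tensor:
    """Reflect the loss around the flood level b so it cannot fall below b.

    While loss > b this behaves like the ordinary loss; once the loss
    would dip below b, the gradient flips sign, so the optimizer hovers
    near the bound instead of memorizing the noisy labels.
    """
    return (loss - b).abs() + b

# usage: loss = flooded_loss(criterion(model(x), noisy_y), b=0.3)
```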
arXiv Detail & Related papers (2023-07-24T19:41:19Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach stabilizes the update of the noise transition matrix, avoiding the arbitrary tuning from a mini-batch of samples that previous methods require (a sketch of one Gibbs sweep follows below).
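As an illustration, here is what one Gibbs sweep could look like under a simple class-conditional noise model with a symmetric Dirichlet prior on the transition matrix. The parameterization (`T`, `alpha`) is assumed for the sketch and is not necessarily LCCN's exact model.

```python
import numpy as np

def gibbs_sweep(probs, noisy_y, T, alpha=1.0, rng=None):
    """One Gibbs sweep over latent true labels and the noise transition.

    probs   : (n, C) classifier posteriors p(z | x_i)
    noisy_y : (n,)   observed noisy labels in {0, ..., C-1}
    T       : (C, C) transition matrix, T[z, y] = p(noisy label y | true z)
    alpha   : symmetric Dirichlet prior on each row of T
    """
    rng = np.random.default_rng() if rng is None else rng
    n, C = probs.shape

    # resample latent true labels: p(z_i | x_i, y_i) ∝ p(z_i | x_i) * T[z_i, y_i]
    post = probs * T[:, noisy_y].T
    post /= post.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(C, p=p) for p in post])

    # resample each row of T from its Dirichlet posterior given the counts
    counts = np.zeros((C, C))
    np.add.at(counts, (z, noisy_y), 1.0)
    T_new = np.vstack([rng.dirichlet(alpha + counts[c]) for c in range(C)])
    return z, T_new
```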
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Error Mitigation Thresholds in Noisy Random Quantum Circuits [0.30723404270319693]
We study the robustness of probabilistic error cancellation and tensor network error mitigation when the noise is imperfectly characterized.
For one-dimensional circuits, error mitigation fails at an $\mathcal{O}(1)$ time for any imperfection in the characterization of disorder.
We discuss further implications for tests of quantum computational advantage, fault-tolerant probes of measurement-induced phase transitions, and quantum algorithms in near-term devices.
arXiv Detail & Related papers (2023-02-08T19:00:01Z)
- Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method that adaptively learns a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in terms of both noise tolerance and performance (a toy sketch of such an adjuster follows below).
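To illustrate the idea of a hyperparameter prediction function, here is a toy PyTorch sketch: a small MLP maps summary statistics of a mini-batch's losses to the `q` hyperparameter of generalized cross-entropy (one common robust loss). The architecture and the choice of statistics are hypothetical, not the paper's design, and the meta-training loop is omitted.

```python
import torch
import torch.nn as nn

class NoiseAwareAdjuster(nn.Module):
    """Hypothetical adjuster: batch loss statistics -> robust-loss hyperparameter."""
    def __init__(self, in_dim=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, stats):
        # squash to (0, 1), the valid range for GCE's q
        return torch.sigmoid(self.net(stats))

def gce_loss(logits, targets, q):
    """Generalized cross-entropy: mean of (1 - p_y^q) / q."""
    p_y = torch.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-12) ** q) / q).mean()

# usage: q = adjuster(torch.stack([losses.mean(), losses.std()]))
#        loss = gce_loss(logits, noisy_targets, q)
```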
arXiv Detail & Related papers (2023-01-18T04:54:58Z)
- Evaluating the Resilience of Variational Quantum Algorithms to Leakage Noise [6.467585493563487]
Leakage noise is a damaging source of error that error correction approaches cannot handle.
The impact of this noise on the performance of variational quantum algorithms (VQAs) remains unknown.
arXiv Detail & Related papers (2022-08-10T14:50:14Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise should follow the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z)
- Efficiently Learning Adversarially Robust Halfspaces with Noise [69.01459748050453]
We study the problem of learning adversarially robust halfspaces in the distribution-independent setting.
In the realizable setting, we provide necessary and sufficient conditions on the adversarial perturbation sets under which halfspaces are efficiently robustly learnable.
arXiv Detail & Related papers (2020-05-15T17:13:54Z)
- Towards Noise-resistant Object Detection with Noisy Annotations [119.63458519946691]
Training deep object detectors requires a significant amount of human-annotated images with accurate object labels and bounding box coordinates.
Noisy annotations are much more easily accessible, but they can be detrimental to learning.
We address the challenging problem of training object detectors with noisy annotations, where the noise contains a mixture of label noise and bounding box noise.
arXiv Detail & Related papers (2020-03-03T01:32:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.