A law of adversarial risk, interpolation, and label noise
- URL: http://arxiv.org/abs/2207.03933v1
- Date: Fri, 8 Jul 2022 14:34:43 GMT
- Title: A law of adversarial risk, interpolation, and label noise
- Authors: Daniel Paleka, Amartya Sanyal
- Abstract summary: In supervised learning, it has been shown that label noise in the data can be interpolated without penalty to test accuracy in many settings.
We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem showing the dependence of adversarial risk on label noise in terms of the data distribution.
- Score: 6.980076213134384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In supervised learning, it has been shown that label noise in the data can be interpolated without penalty to test accuracy in many settings. We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem showing the dependence of adversarial risk on label noise in terms of the data distribution. Our results are almost sharp without accounting for the inductive bias of the learning algorithm. We also show that inductive bias makes the effect of label noise much stronger.
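The following minimal sketch is not the paper's construction or proof; it only illustrates the claimed mechanism on synthetic data under assumed parameters (data dimension, noise rate, perturbation radius are all hypothetical). A 1-nearest-neighbour classifier interpolates a noisily labeled training set, and every memorized flipped label creates a small region where a bounded perturbation changes the prediction, so adversarial risk can be far larger than clean test error.

```python
# Minimal sketch (hypothetical parameters): a 1-nearest-neighbour classifier
# interpolates every training label, so each memorized flipped label creates a
# small region where a bounded perturbation changes the prediction.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, noise_rate, eps = 2000, 500, 0.1, 0.15

# Two 2-D Gaussian classes whose means are separated along the first axis.
y_train = rng.integers(0, 2, size=n_train)
X_train = rng.normal(size=(n_train, 2)) + 3.0 * y_train[:, None] * np.array([1.0, 0.0])

# Flip a fraction of the training labels; the interpolating model memorizes the flips.
flipped = rng.random(n_train) < noise_rate
y_noisy = np.where(flipped, 1 - y_train, y_train)

def predict_1nn(queries):
    """1-NN prediction: interpolates the (noisy) training labels exactly."""
    dists = np.linalg.norm(queries[:, None, :] - X_train[None, :, :], axis=-1)
    return y_noisy[dists.argmin(axis=1)]

# Clean test set drawn from the same distribution.
y_test = rng.integers(0, 2, size=n_test)
X_test = rng.normal(size=(n_test, 2)) + 3.0 * y_test[:, None] * np.array([1.0, 0.0])

# Standard risk: fraction of clean test points already misclassified.
clean_err = np.mean(predict_1nn(X_test) != y_test)

# Lower bound on adversarial risk at radius eps: if a training point carrying the
# wrong (noisy) label lies within eps of a test point, moving the test point onto
# it flips the 1-NN prediction.
pair_dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
wrong_label = y_noisy[None, :] != y_test[:, None]
vulnerable = ((pair_dists <= eps) & wrong_label).any(axis=1)
adv_err = np.mean(vulnerable | (predict_1nn(X_test) != y_test))

print(f"noise rate {noise_rate:.0%}: clean error {clean_err:.3f}, "
      f"adversarial risk (radius {eps}) >= {adv_err:.3f}")
```

In this toy setting the clean error stays close to the Bayes error while the adversarial-risk lower bound grows with the noise rate, which is the qualitative behaviour the paper's theorem quantifies.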
Related papers
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually-specified probability measure, we can reduce the side effects of noisy and long-tailed data simultaneously.
Our method extracts a class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
arXiv Detail & Related papers (2024-04-10T07:34:37Z) - Binary Classification with Instance and Label Dependent Label Noise [4.061135251278187]
We show that learning solely with noisy samples is impossible without access to clean samples or strong assumptions on the distribution of the data.
arXiv Detail & Related papers (2023-06-06T04:47:44Z) - Label Noise Robustness of Conformal Prediction [24.896717715256358]
We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise.
Our analysis tackles both regression and classification problems.
We extend our theory and formulate the requirements for correctly controlling a general loss function.
arXiv Detail & Related papers (2022-09-28T17:59:35Z) - Combating Noise: Semi-supervised Learning by Region Uncertainty
Quantification [55.23467274564417]
Current methods are easily distracted by noisy regions generated by pseudo labels.
We propose noise-resistant semi-supervised learning by quantifying the region uncertainty.
Experiments on both PASCAL VOC and MS COCO demonstrate the extraordinary performance of our method.
arXiv Detail & Related papers (2021-11-01T13:23:42Z) - Label Noise in Adversarial Training: A Novel Perspective to Study Robust
Overfitting [45.58217741522973]
We show that label noise exists in adversarial training.
Such label noise is due to the mismatch between the true label distribution of adversarial examples and the label inherited from clean examples.
We propose a method to automatically calibrate the labels to address label noise and robust overfitting.
arXiv Detail & Related papers (2021-10-07T01:15:06Z) - Can Less be More? When Increasing-to-Balancing Label Noise Rates
Considered Beneficial [7.299247713124782]
We quantify the trade-offs introduced by increasing a certain group of instances' label noise rate.
We present a method to leverage our idea of inserting label noise for the task of learning with noisy labels.
arXiv Detail & Related papers (2021-07-13T08:31:57Z) - Open-set Label Noise Can Improve Robustness Against Inherent Label Noise [27.885927200376386]
We show that open-set noisy labels can be non-toxic and can even improve robustness against inherent label noise.
We propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training.
arXiv Detail & Related papers (2021-06-21T07:15:50Z) - Tackling Instance-Dependent Label Noise via a Universal Probabilistic
Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z) - A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z) - Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels [98.13491369929798]
We propose a framework called Class2Simi, which transforms data points with noisy class labels to data pairs with noisy similarity labels.
Class2Simi is computationally efficient because the transformation is performed on the fly within mini-batches, and only the loss on top of the model's predictions is changed to a pairwise form (see the sketch after this list).
arXiv Detail & Related papers (2020-06-14T07:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.