Does label smoothing mitigate label noise?
- URL: http://arxiv.org/abs/2003.02819v1
- Date: Thu, 5 Mar 2020 18:43:17 GMT
- Title: Does label smoothing mitigate label noise?
- Authors: Michal Lukasik, Srinadh Bhojanapalli, Aditya Krishna Menon, Sanjiv Kumar
- Abstract summary: We show that label smoothing is competitive with loss-correction under label noise.
We show that when distilling models from noisy data, label smoothing of the teacher is beneficial.
- Score: 57.76529645344897
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Label smoothing is commonly used in training deep learning models, wherein
one-hot training labels are mixed with uniform label vectors. Empirically,
smoothing has been shown to improve both predictive performance and model
calibration. In this paper, we study whether label smoothing is also effective
as a means of coping with label noise. While label smoothing would seem to
amplify this problem, since it is equivalent to injecting symmetric noise into
the labels, we show how it relates to a general family of loss-correction
techniques from the label noise literature. Building on this connection, we
show that label smoothing is competitive with loss-correction under label
noise. Further, we show that when distilling models from noisy data, label
smoothing of the teacher is beneficial; this is in contrast to recent findings
for noise-free problems, and sheds further light on settings where label
smoothing is beneficial.
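As a concrete illustration of the smoothing operation described in the abstract, the following NumPy sketch mixes one-hot labels with the uniform distribution and computes the resulting cross-entropy. The function names and numeric values are illustrative, not taken from the paper:

```python
import numpy as np

def smooth_labels(y, num_classes, alpha=0.1):
    """Mix one-hot labels with the uniform distribution.

    With smoothing strength alpha, each target becomes
    (1 - alpha) * one_hot + alpha * uniform.
    """
    one_hot = np.eye(num_classes)[y]
    uniform = np.full((len(y), num_classes), 1.0 / num_classes)
    return (1.0 - alpha) * one_hot + alpha * uniform

def cross_entropy(probs, targets, eps=1e-12):
    """Average cross-entropy between soft targets and predicted probabilities."""
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))

# Three examples, four classes; probs stand in for model outputs.
y = np.array([0, 2, 1])
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.1, 0.6, 0.1],
                  [0.1, 0.8, 0.05, 0.05]])
targets = smooth_labels(y, num_classes=4, alpha=0.1)
print(targets[0])  # [0.925 0.025 0.025 0.025]
print(cross_entropy(probs, targets))
```

Note how the smoothed target still sums to one: with alpha = 0.1 and four classes, the true class keeps mass 0.925 and the rest share 0.075, which is exactly the "injecting symmetric noise" reading discussed in the abstract.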
Related papers
- Clean Label Disentangling for Medical Image Segmentation with Noisy Labels [25.180056839942345]
Current methods for medical image segmentation suffer from incorrect annotations, a problem known as the noisy label issue.
We propose a class-balanced sampling strategy to tackle the class-imbalanced problem.
We extend our clean label disentangling framework to a new noisy feature-aided clean label disentangling framework.
arXiv Detail & Related papers (2023-11-28T07:54:27Z)
- Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting [45.58217741522973]
We show that label noise exists in adversarial training.
Such label noise is due to the mismatch between the true label distribution of adversarial examples and the label inherited from clean examples.
We propose a method to automatically calibrate the label to address the label noise and robust overfitting.
arXiv Detail & Related papers (2021-10-07T01:15:06Z)
- A Realistic Simulation Framework for Learning with Label Noise [17.14439597393087]
We show that this framework generates synthetic noisy labels that exhibit important characteristics of the label noise.
We also benchmark several existing algorithms for learning with noisy labels.
We propose a new technique, Label Quality Model (LQM), that leverages annotator features to predict and correct against noisy labels.
arXiv Detail & Related papers (2021-07-23T18:53:53Z)
- Understanding (Generalized) Label Smoothing when Learning with Noisy Labels [57.37057235894054]
Label smoothing (LS) is an emerging learning paradigm that uses a positively weighted average of the hard training labels and uniformly distributed soft labels.
We provide an understanding of the properties of generalized label smoothing (GLS) when learning with noisy labels.
arXiv Detail & Related papers (2021-06-08T07:32:29Z)
- Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness [2.1930130356902207]
We propose a principled model of label noise that generalizes instance-dependent noise to multiple labelers.
Under our labeler-dependent model, label noise manifests itself under two modalities: natural error of good-faith labelers, and adversarial labels provided by malicious actors.
We present two adversarial attack vectors that more accurately reflect the label noise that may be encountered in real-world settings.
arXiv Detail & Related papers (2021-05-28T19:58:18Z)
- Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study [59.95267695402516]
This work aims to empirically examine the claim that label smoothing is incompatible with knowledge distillation.
We provide a novel connection on how label smoothing affects distributions of semantically similar and dissimilar classes.
Through extensive analyses, visualizations, and comprehensive experiments, we demonstrate the one-sidedness and imperfection of the incompatibility view.
arXiv Detail & Related papers (2021-04-01T17:59:12Z)
- Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels [86.5943044285146]
The label noise transition matrix $T$ reflects the probabilities that true labels flip into noisy ones.
In this paper, we focus on learning under the mixed closed-set and open-set label noise.
Our method better models the mixed label noise, as reflected in its more robust performance compared with prior state-of-the-art label-noise learning methods.
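The transition matrix $T$ described above underlies the standard forward loss correction from the label-noise literature. A minimal sketch, assuming symmetric noise with a flip rate of 0.2 (these values and names are illustrative, not the paper's extended $T$):

```python
import numpy as np

def forward_corrected_probs(clean_probs, T):
    """Map clean-class probabilities to noisy-label probabilities via T.

    T[i, j] is the probability that true label i flips to observed label j,
    so p_noisy[j] = sum_i p_clean[i] * T[i, j], i.e. p_clean @ T.
    Training a cross-entropy loss against the observed noisy labels with
    these corrected probabilities is the standard forward correction.
    """
    return clean_probs @ T

# A 3-class symmetric-noise transition matrix with flip rate 0.2.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
clean = np.array([1.0, 0.0, 0.0])  # model is certain of class 0
noisy = forward_corrected_probs(clean, T)
print(noisy)  # [0.8 0.1 0.1]
```

With symmetric noise, applying $T$ to a one-hot prediction yields exactly a smoothed label vector, which is the connection between label smoothing and loss correction drawn in the main abstract above.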
arXiv Detail & Related papers (2020-12-02T02:42:45Z)
- Learning to Purify Noisy Labels via Meta Soft Label Corrector [49.92310583232323]
Recent deep neural networks (DNNs) can easily overfit to biased training data with noisy labels.
Label correction strategy is commonly used to alleviate this issue.
We propose a meta-learning model that estimates soft labels through a meta-gradient descent step.
arXiv Detail & Related papers (2020-08-03T03:25:17Z)
- Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels [98.13491369929798]
We propose a framework called Class2Simi, which transforms data points with noisy class labels to data pairs with noisy similarity labels.
Class2Simi is computationally efficient because the transformation is performed on the fly within mini-batches, and it only changes the loss on top of the model prediction into a pairwise form.
arXiv Detail & Related papers (2020-06-14T07:55:32Z)
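The class-to-similarity transformation summarized in the Class2Simi entry above can be sketched as follows, pairing all examples in a mini-batch; the function name and label values are illustrative, not the authors' code:

```python
import numpy as np
from itertools import combinations

def class_to_simi(labels):
    """Turn (possibly noisy) class labels into pairwise similarity labels.

    For every pair (i, j) in the mini-batch, the similarity label is 1 if
    the two class labels agree and 0 otherwise. Noise in the class labels
    carries over to the similarity labels, but at a reduced rate.
    """
    pairs, simi = [], []
    for i, j in combinations(range(len(labels)), 2):
        pairs.append((i, j))
        simi.append(int(labels[i] == labels[j]))
    return pairs, np.array(simi)

batch_labels = [0, 2, 0, 1]  # noisy class labels for one mini-batch
pairs, simi = class_to_simi(batch_labels)
print(pairs)  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(simi)   # [0 1 0 0 0 0]
```

A mini-batch of n examples yields n(n-1)/2 similarity labels, which is why the transformation is performed on the fly per batch rather than over the whole dataset.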
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.