Do We Need to Penalize Variance of Losses for Learning with Label Noise?
- URL: http://arxiv.org/abs/2201.12739v1
- Date: Sun, 30 Jan 2022 06:19:08 GMT
- Title: Do We Need to Penalize Variance of Losses for Learning with Label Noise?
- Authors: Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Mingming Gong,
Tongliang Liu
- Abstract summary: We find that the variance of losses should be increased for the problem of learning with noisy labels.
By exploiting the label noise transition matrix, regularizers can be easily designed to increase the variance of losses.
Empirically, the proposed method, which increases the variance of losses, significantly improves the generalization ability of baselines on both synthetic and real-world datasets.
- Score: 91.38888889609002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms which minimize the averaged loss have been widely designed for
dealing with noisy labels. Intuitively, when there is a finite training sample,
penalizing the variance of losses will improve the stability and generalization
of the algorithms. Interestingly, we found that the variance should be
increased for the problem of learning with noisy labels. Specifically,
increasing the variance will boost the memorization effects and reduce the
harmfulness of incorrect labels. By exploiting the label noise transition
matrix, regularizers can be easily designed to increase the variance of losses
and be plugged into many existing algorithms. Empirically, the proposed method,
which increases the variance of losses, significantly improves the
generalization ability of baselines on both synthetic and real-world datasets.
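The core idea can be sketched in a few lines: instead of penalizing the variance of per-sample losses, subtract it, so that minimizing the objective rewards spread between the (likely clean) small losses and the (likely noisy) large losses. This is a simplified illustration only; the paper derives its regularizers from the label noise transition matrix, which is omitted here, and the function name and coefficient below are invented.

```python
import numpy as np

def variance_increasing_objective(losses, lam=0.1):
    """Mean loss MINUS a variance penalty: minimizing this objective
    encourages the per-sample losses to spread out, a simplified
    stand-in for a variance-increasing regularizer."""
    losses = np.asarray(losses, dtype=float)
    return losses.mean() - lam * losses.var()

# With a mix of small (likely clean) and large (likely noisy) losses,
# the objective sits below the plain mean, rewarding high spread.
mixed = [0.1, 0.2, 2.5, 3.0]
print(variance_increasing_objective(mixed) < np.mean(mixed))  # True
```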
Related papers
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise
Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method can outperform multiple baselines with clear margins in broad noise levels and enjoy great scalability.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise poses a risk of overfitting to the noisy labels.
Motivated by this risk, we propose imposing a lower bound on the training loss to mitigate the overfitting caused by label noise.
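One common way to impose a lower bound on the training loss is a flooding-style transform; the sketch below is a hedged illustration of that general idea, not necessarily this paper's exact formulation.

```python
def flooded_loss(loss, flood_level=0.1):
    """Flooding-style lower bound: once the training loss drops below
    the flood level b, the transformed loss rises again, so training
    hovers around b instead of driving the loss to zero by memorizing
    the noisy labels."""
    return abs(loss - flood_level) + flood_level

print(flooded_loss(0.5))   # 0.5  (above the bound: unchanged)
print(flooded_loss(0.02))  # ~0.18 (below the bound: pushed back up)
```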
arXiv Detail & Related papers (2023-07-24T19:41:19Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for
Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
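A minimal sketch of entropy-regularized alignment: a KL term pulls the pseudo-label distribution toward the model prediction, while an entropy penalty favors sharper (more confident) pseudo-labels. The helper names and coefficient are illustrative assumptions, not the paper's API.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def kl(p, q):
    """KL divergence KL(p || q) between probability vectors."""
    p = np.clip(p, 1e-12, 1.0)
    q = np.clip(q, 1e-12, 1.0)
    return (p * np.log(p / q)).sum()

def alignment_objective(pseudo, pred, lam=0.1):
    """Align pseudo-label with prediction, plus an entropy penalty
    that sharpens the pseudo-label (simplified stand-in)."""
    return kl(pseudo, pred) + lam * entropy(pseudo)

pseudo = np.array([0.7, 0.2, 0.1])
sharp  = np.array([0.9, 0.08, 0.02])
# The entropy term favors the sharper, more confident pseudo-label.
print(entropy(sharp) < entropy(pseudo))  # True
```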
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Sample Selection with Uncertainty of Losses for Learning with Noisy
Labels [145.06552420999986]
In learning with noisy labels, the sample selection approach, which regards small-loss data as correctly labeled during training, is very popular.
However, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely, but not certain, to be incorrectly labeled.
In this paper, we incorporate the uncertainty of losses by adopting interval estimation instead of point estimation of losses.
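The interval-estimation idea can be sketched as follows: track each sample's loss over recent epochs and select a sample only when the upper end of its loss interval (mean plus a multiple of the standard deviation) stays small, rather than thresholding a single point estimate. The function name and threshold are illustrative assumptions.

```python
import numpy as np

def select_confident_clean(loss_history, threshold=0.5, k=1.0):
    """Select samples whose loss interval (mean +/- k*std across recent
    epochs) lies entirely below a threshold.
    loss_history: array of shape (epochs, n_samples)."""
    mean = loss_history.mean(axis=0)
    std = loss_history.std(axis=0)
    upper = mean + k * std          # upper end of the loss interval
    return np.where(upper < threshold)[0]

# Sample 0: consistently small loss; sample 1: small mean but unstable,
# so its interval's upper end exceeds the threshold.
hist = np.array([[0.10, 0.05],
                 [0.12, 0.90],
                 [0.09, 0.10]])
print(select_confident_clean(hist))  # [0]
```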
arXiv Detail & Related papers (2021-06-01T12:53:53Z)
- An Exploration into why Output Regularization Mitigates Label Noise [0.0]
Noise-robust losses are one of the more promising approaches for dealing with label noise.
We show that losses that incorporate an output regularization term, such as label smoothing and entropy regularization, become symmetric as the regularization coefficient goes to infinity.
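The claimed limit can be checked numerically: with label smoothing parameterized by a coefficient alpha in [0, 1], the spread between per-class losses shrinks as alpha approaches 1, i.e. in the limit the loss no longer depends on which class the label is (symmetry). This toy sketch uses invented names and a 3-class example.

```python
import numpy as np

def smoothed_ce(probs, label, alpha, n_classes=3):
    """Cross-entropy against a label-smoothed target:
    (1 - alpha) * one_hot(label) + alpha * uniform."""
    target = np.full(n_classes, alpha / n_classes)
    target[label] += 1.0 - alpha
    return -(target * np.log(np.clip(probs, 1e-12, 1.0))).sum()

probs = np.array([0.7, 0.2, 0.1])
for alpha in (0.0, 0.9, 0.999):
    per_class = [smoothed_ce(probs, y, alpha) for y in range(3)]
    # Spread between the largest and smallest per-class loss: it
    # shrinks toward 0 as alpha -> 1, illustrating the symmetry claim.
    print(alpha, round(max(per_class) - min(per_class), 3))
```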
arXiv Detail & Related papers (2021-04-26T11:16:30Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic
Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Regularization in neural network optimization via trimmed stochastic
gradient descent with noisy label [2.66512000865131]
Regularization is essential for avoiding over-fitting to training data in neural network optimization.
We propose a first-order optimization method (Label-Noised Trim-SGD) which combines label noise with example trimming.
The proposed algorithm enables us to impose a large label noise and obtain a better regularization effect than the original methods.
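The trimming half of the idea can be sketched as a simple selection step: before a gradient update, drop the fraction of examples with the largest current loss, so the injected label noise acts as a regularizer without the worst-hit examples dominating the step. A hedged sketch with invented names, not the paper's implementation.

```python
import numpy as np

def trim_largest_losses(indices, losses, trim_frac=0.1):
    """Trimming step (sketch): keep the (1 - trim_frac) fraction of
    examples with the smallest loss; an ordinary SGD step would then
    run on the kept examples."""
    n_keep = int(len(indices) * (1 - trim_frac))
    order = np.argsort(losses)          # ascending loss
    return np.asarray(indices)[order[:n_keep]]

idx = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
batch_losses = [0.1, 0.2, 5.0, 0.3, 0.1, 4.0, 0.2, 0.3, 0.1, 0.2]
kept = trim_largest_losses(idx, batch_losses, trim_frac=0.2)
# The two largest-loss examples (indices 2 and 5) are dropped.
print(sorted(kept))
```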
arXiv Detail & Related papers (2020-12-21T01:31:53Z)
- Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [61.8970957519509]
This study proposes a new meta-transition-learning strategy for the task.
Specifically, through the sound guidance of a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually ameliorated.
Our method extracts the transition matrix more accurately, which naturally leads to more robust performance than prior art.
arXiv Detail & Related papers (2020-06-10T07:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.