Learning to Purify Noisy Labels via Meta Soft Label Corrector
- URL: http://arxiv.org/abs/2008.00627v1
- Date: Mon, 3 Aug 2020 03:25:17 GMT
- Title: Learning to Purify Noisy Labels via Meta Soft Label Corrector
- Authors: Yichen Wu, Jun Shu, Qi Xie, Qian Zhao and Deyu Meng
- Abstract summary: Recent deep neural networks (DNNs) can easily overfit to biased training data with noisy labels.
The label correction strategy is commonly used to alleviate this issue.
We propose a meta-learning model that estimates soft labels through a meta-gradient descent step.
- Score: 49.92310583232323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep neural networks (DNNs) can easily overfit to biased training data
with noisy labels. The label correction strategy is commonly used to alleviate this
issue by identifying suspected noisy labels and then correcting them. Current
approaches to correcting corrupted labels usually require pre-defined label
correction rules or manually preset hyper-parameters. Such fixed settings are
hard to apply in practice, since accurate label correction usually depends on
the concrete problem, the training data, and the temporal information hidden in
the dynamic iterations of the training process. To address this issue, we
propose a meta-learning model that estimates soft labels through a
meta-gradient descent step under the guidance of noise-free meta data. By
viewing label correction as a meta-process and using a meta-learner to
automatically correct labels, we can adaptively obtain rectified soft labels
iteratively, according to the current training problem, without manually preset
hyper-parameters. Moreover, our method is model-agnostic and can easily be
combined with any existing model. Comprehensive experiments substantiate the
superiority of our method over current SOTA label correction strategies on both
synthetic and real-world noisy-label problems.
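As a rough, self-contained PyTorch sketch of the meta-gradient step described above (the linear model, batch sizes, and step sizes are illustrative assumptions, not the authors' released code): learnable soft-label logits drive a virtual update of the model on the noisy training batch, the virtual model is evaluated on a small clean meta batch, and the meta loss is backpropagated to the soft labels.
```python
import torch
import torch.nn.functional as F

D, C = 10, 3                    # feature dim, number of classes
lr, meta_lr = 0.1, 0.5          # inner and outer step sizes (illustrative)

W = torch.randn(C, D, requires_grad=True)             # linear model weights
label_logits = torch.zeros(8, C, requires_grad=True)  # learnable soft labels

x_train = torch.randn(8, D)                           # noisily labeled batch
x_meta = torch.randn(4, D)                            # small clean meta batch
y_meta = torch.randint(0, C, (4,))

# Inner step: virtual SGD update of the model using the current soft labels.
soft_labels = F.softmax(label_logits, dim=1)
train_loss = F.cross_entropy(x_train @ W.t(), soft_labels)
(grad_W,) = torch.autograd.grad(train_loss, W, create_graph=True)
W_virtual = W - lr * grad_W

# Outer (meta) step: evaluate the virtual model on clean meta data and
# backpropagate the meta loss to the soft-label logits.
meta_loss = F.cross_entropy(x_meta @ W_virtual.t(), y_meta)
(grad_logits,) = torch.autograd.grad(meta_loss, label_logits)
with torch.no_grad():
    label_logits -= meta_lr * grad_logits  # corrected soft labels
```
In practice such an update would be interleaved with ordinary training steps, so the soft labels are rectified iteratively as training progresses; the sketch shows a single update.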
Related papers
- Alternative Pseudo-Labeling for Semi-Supervised Automatic Speech Recognition [49.42732949233184]
When labeled data is insufficient, semi-supervised learning with the pseudo-labeling technique can significantly improve the performance of automatic speech recognition.
Taking noisy labels as ground-truth in the loss function results in suboptimal performance.
We propose a novel framework named alternative pseudo-labeling to tackle the issue of noisy pseudo-labels.
arXiv Detail & Related papers (2023-08-12T12:13:52Z)
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z)
- Learning from Noisy Labels with Decoupled Meta Label Purifier [33.87292143223425]
Training deep neural networks with noisy labels is challenging since DNNs can easily memorize inaccurate labels.
In this paper, we propose a novel multi-stage label purifier named DMLP.
DMLP decouples the label correction process into label-free representation learning and a simple meta label purifier.
arXiv Detail & Related papers (2023-02-14T03:39:30Z)
- Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
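For orientation, label refurbishment in this pseudo-label-plus-confidence spirit is often written as a convex combination of the noisy label and the model's prediction; the sketch below is a generic version of that idea, not Robust LR's exact formulation, and all names are made up.
```python
import torch
import torch.nn.functional as F

def refurbish(logits: torch.Tensor, noisy_onehot: torch.Tensor) -> torch.Tensor:
    """Blend the model's pseudo-label with the given noisy label,
    weighted by an estimated per-sample confidence."""
    probs = F.softmax(logits, dim=1)
    conf = probs.max(dim=1, keepdim=True).values   # confidence estimate
    return conf * probs + (1.0 - conf) * noisy_onehot

logits = torch.tensor([[3.0, 0.0, 0.0]])   # confident prediction: class 0
noisy = torch.tensor([[0.0, 0.0, 1.0]])    # given (possibly wrong) label
print(refurbish(logits, noisy))            # most mass moves to class 0
```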
arXiv Detail & Related papers (2021-12-06T12:10:17Z)
- Instance Correction for Learning with Open-set Noisy Labels [145.06552420999986]
We use the sample selection approach to handle open-set noisy labels.
The discarded data are regarded as mislabeled and do not participate in training.
We modify the instances of the discarded data so that the model's predictions on them become consistent with the given labels.
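A minimal sketch of that instance-correction idea, under the assumption that the inputs are adjusted by gradient descent until the classifier agrees with the given labels (the model, step size, and step count are illustrative, not the paper's exact procedure):
```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 3)          # stand-in classifier
x = torch.randn(4, 10)                  # discarded (open-set) instances
y = torch.randint(0, 3, (4,))           # their given labels

x_adj = x.clone().requires_grad_(True)
for _ in range(10):
    loss = F.cross_entropy(model(x_adj), y)
    (grad_x,) = torch.autograd.grad(loss, x_adj)
    with torch.no_grad():
        x_adj -= 0.5 * grad_x           # nudge inputs toward the given labels
x_corrected = x_adj.detach()            # reuse these instances for training
```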
arXiv Detail & Related papers (2021-06-01T13:05:55Z)
- Error-Bounded Correction of Noisy Labels [17.510654621245656]
We show that the prediction of a noisy classifier can indeed be a good indicator of whether the label of a training sample is clean.
Based on this theoretical result, we propose a novel algorithm that corrects labels according to the noisy classifier's predictions.
We incorporate our label correction algorithm into the training of deep neural networks and train models that achieve superior test performance on multiple public datasets.
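A small sketch of such prediction-based correction (the confidence threshold and names are illustrative; the paper derives its own error-bounded rule): a label is replaced when the classifier confidently disagrees with it.
```python
import torch
import torch.nn.functional as F

def correct_labels(logits: torch.Tensor, labels: torch.Tensor,
                   threshold: float = 0.9) -> torch.Tensor:
    """Return labels with high-confidence disagreements replaced."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    flip = (conf > threshold) & (pred != labels)
    return torch.where(flip, pred, labels)

logits = torch.tensor([[4.0, 0.0, 0.0],    # confident: class 0
                       [0.2, 0.1, 0.0]])   # unconfident
labels = torch.tensor([2, 2])              # possibly noisy labels
print(correct_labels(logits, labels))      # tensor([0, 2])
```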
arXiv Detail & Related papers (2020-11-19T19:23:23Z)
- Learning Soft Labels via Meta Learning [3.4852307714135375]
One-hot labels do not represent soft decision boundaries among concepts, and hence, models trained on them are prone to overfitting.
We propose a framework, where we treat the labels as learnable parameters, and optimize them along with model parameters.
We show that learned labels capture semantic relationship between classes, and thereby improve teacher models for the downstream task of distillation.
arXiv Detail & Related papers (2020-09-20T18:42:13Z)
- Meta Soft Label Generation for Noisy Labels [0.0]
We propose a Meta Soft Label Generation algorithm called MSLG.
MSLG jointly generates soft labels using meta-learning techniques and learns the DNN parameters.
Our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-07-11T19:37:44Z)
- Does label smoothing mitigate label noise? [57.76529645344897]
We show that label smoothing is competitive with loss-correction under label noise.
We show that when distilling models from noisy data, label smoothing of the teacher is beneficial.
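For reference, standard label smoothing, the mechanism examined here, moves a fraction epsilon of each one-hot target's mass to the uniform distribution over the classes; a minimal sketch:
```python
import torch

def smooth_labels(labels: torch.Tensor, num_classes: int,
                  epsilon: float = 0.1) -> torch.Tensor:
    """Spread epsilon of each one-hot target uniformly over all classes."""
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    return (1.0 - epsilon) * one_hot + epsilon / num_classes

print(smooth_labels(torch.tensor([0, 2]), num_classes=3))
# tensor([[0.9333, 0.0333, 0.0333],
#         [0.0333, 0.0333, 0.9333]])
```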
arXiv Detail & Related papers (2020-03-05T18:43:17Z)