Meta Soft Label Generation for Noisy Labels
- URL: http://arxiv.org/abs/2007.05836v2
- Date: Tue, 19 Jan 2021 09:59:38 GMT
- Title: Meta Soft Label Generation for Noisy Labels
- Authors: Görkem Algan, Ilkay Ulusoy
- Abstract summary: We propose a Meta Soft Label Generation algorithm called MSLG.
MSLG can jointly generate soft labels using meta-learning techniques.
Our approach outperforms other state-of-the-art methods by a large margin.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existence of noisy labels in the dataset causes significant performance
degradation for deep neural networks (DNNs). To address this problem, we
propose a Meta Soft Label Generation algorithm called MSLG, which can jointly
generate soft labels using meta-learning techniques and learn DNN parameters in
an end-to-end fashion. Our approach adapts the meta-learning paradigm to
estimate the optimal label distribution by checking gradient directions on both
noisy training data and noise-free meta-data. To update the soft labels
iteratively, a meta-gradient descent step is performed on the estimated labels
so as to minimize the loss on the noise-free meta samples. In each iteration,
the base classifier is then trained on the estimated labels. MSLG is
model-agnostic and can easily be added on top of any existing model. We
performed extensive
experiments on CIFAR10, Clothing1M and Food101N datasets. Results show that our
approach outperforms other state-of-the-art methods by a large margin.
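The update described in the abstract can be read as a bi-level optimization. Below is a minimal PyTorch sketch of one such iteration, assuming the soft labels are parameterized as per-sample logits; the function name, learning rates, and use of `torch.func.functional_call` are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def mslg_style_step(model, y_logits, idx, x_noisy, x_meta, y_meta,
                    inner_lr=0.1, label_lr=100.0):
    """One meta soft-label update (sketch). `y_logits` is an (N, C) tensor of
    learnable per-sample label logits with requires_grad=True; `idx` selects
    the rows for the current noisy batch. Hyperparameters are illustrative."""
    params = dict(model.named_parameters())
    soft_labels = F.softmax(y_logits[idx], dim=1)

    # Inner step: a virtual SGD update of the classifier on the noisy batch,
    # trained against the current soft labels. create_graph=True keeps the
    # graph so the meta loss can differentiate through this step.
    logits = functional_call(model, params, (x_noisy,))
    inner_loss = -(soft_labels * F.log_softmax(logits, dim=1)).sum(1).mean()
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    virtual = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Outer step: evaluate the virtually updated classifier on clean meta
    # data and descend the meta loss w.r.t. the label logits.
    meta_logits = functional_call(model, virtual, (x_meta,))
    meta_loss = F.cross_entropy(meta_logits, y_meta)
    label_grad = torch.autograd.grad(meta_loss, y_logits)[0]
    with torch.no_grad():
        y_logits -= label_lr * label_grad
    return meta_loss.item()
```

After this step, the base classifier would take an ordinary gradient step on the refreshed soft labels `softmax(y_logits[idx])`, matching the abstract's statement that the base classifier is trained on the estimated labels in each iteration.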
Related papers
- BatMan-CLR: Making Few-shots Meta-Learners Resilient Against Label Noise [5.67944073225624]
We present the first analysis of the impact of varying levels of label noise on the performance of state-of-the-art meta-learners.
We show that the accuracy of Reptile, iMAML, and foMAML drops by up to 42% on the Omniglot and CifarFS datasets when meta-training is affected by label noise.
We propose two sampling techniques, namely manifold (Man) and batch manifold (BatMan), which transform the noisy supervised learners into semi-supervised ones.
arXiv Detail & Related papers (2023-09-12T08:30:35Z)
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- Learning from Noisy Labels with Decoupled Meta Label Purifier [33.87292143223425]
Training deep neural networks with noisy labels is challenging, since DNNs can easily memorize inaccurate labels.
In this paper, we propose a novel multi-stage label purifier named DMLP.
DMLP decouples the label correction process into label-free representation learning and a simple meta label purifier.
arXiv Detail & Related papers (2023-02-14T03:39:30Z)
- Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning [93.63638405586354]
We propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML).
Firstly, we train a classifier via common semi-supervised learning (SSL) and use it to obtain the pseudo-labels of unlabeled data.
We build few-shot tasks from labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression.
arXiv Detail & Related papers (2022-07-14T10:53:53Z)
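The first stage of PLML described in that entry (an SSL-trained classifier labels the unlabeled pool) might look like the sketch below, in the same PyTorch style as the earlier sketch; the confidence threshold and the plain-tensor loader interface are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.95):
    """Keep only unlabeled samples that the SSL-trained classifier predicts
    with high confidence, and use its predictions as their pseudo-labels
    (sketch; the 0.95 threshold is an illustrative choice, not from the paper)."""
    model.eval()
    kept_x, kept_y = [], []
    for x in unlabeled_loader:  # assumed to yield batches of inputs
        probs = F.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        mask = conf >= threshold
        kept_x.append(x[mask])
        kept_y.append(pred[mask])
    return torch.cat(kept_x), torch.cat(kept_y)
```

Few-shot tasks would then be sampled from the union of the labeled set and these pseudo-labeled pairs.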
- MetaLabelNet: Learning to Generate Soft-Labels from Noisy-Labels [0.20305676256390928]
Real-world datasets commonly have noisy labels, which negatively affect the performance of deep neural networks (DNNs).
We propose a label noise robust learning algorithm, in which the base classifier is trained on soft-labels that are produced according to a meta-objective.
Our algorithm uses a small amount of clean data as meta-data, which can be obtained effortlessly in many cases.
arXiv Detail & Related papers (2021-03-19T15:47:44Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
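A minimal sketch of the online label smoothing idea in that entry: accumulate the model's predicted distributions for each target class during an epoch, then reuse the normalized statistics as the next epoch's soft targets. The uniform initialization, the equal mixing weight, and restricting statistics to correctly classified samples are assumptions, not necessarily the paper's exact choices.

```python
import torch
import torch.nn.functional as F

class OnlineLabelSmoother:
    """Per-class running statistics of model predictions, reused as soft
    targets in the following epoch (sketch)."""
    def __init__(self, num_classes, alpha=0.5):
        self.alpha = alpha  # weight on the soft-label term (assumption)
        self.soft = torch.full((num_classes, num_classes), 1.0 / num_classes)
        self.accum = torch.zeros(num_classes, num_classes)
        self.count = torch.zeros(num_classes)

    def loss(self, logits, target):
        # Mix the usual hard cross-entropy with the stored soft targets.
        log_probs = F.log_softmax(logits, dim=1)
        soft_target = self.soft.to(logits.device)[target]
        soft_loss = -(soft_target * log_probs).sum(dim=1).mean()
        hard_loss = F.cross_entropy(logits, target)
        self._accumulate(logits.detach(), target)
        return self.alpha * soft_loss + (1 - self.alpha) * hard_loss

    def _accumulate(self, logits, target):
        # Collect predictions of correctly classified samples per class.
        probs = F.softmax(logits, dim=1).cpu()
        target = target.cpu()
        correct = probs.argmax(dim=1) == target
        for c in target[correct].unique():
            mask = (target == c) & correct
            self.accum[c] += probs[mask].sum(dim=0)
            self.count[c] += mask.sum()

    def end_epoch(self):
        # Normalize this epoch's statistics into next epoch's soft labels.
        used = self.count > 0
        self.soft[used] = self.accum[used] / self.count[used].unsqueeze(1)
        self.accum.zero_()
        self.count.zero_()
```

Calling `loss` inside the training loop and `end_epoch` once per epoch keeps the soft targets one epoch behind the model, which is what makes the scheme "online".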
- Semi-supervised Relation Extraction via Incremental Meta Self-Training [56.633441255756075]
Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples.
Existing self-training methods suffer from the gradual drift problem, where noisy pseudo labels on unlabeled data are incorporated during training.
We propose a method called MetaSRE, where a Relation Label Generation Network generates quality assessment on pseudo labels by (meta) learning from the successful and failed attempts on Relation Classification Network as an additional meta-objective.
arXiv Detail & Related papers (2020-10-06T03:54:11Z)
- Learning Soft Labels via Meta Learning [3.4852307714135375]
One-hot labels do not represent soft decision boundaries among concepts, and hence, models trained on them are prone to overfitting.
We propose a framework, where we treat the labels as learnable parameters, and optimize them along with model parameters.
We show that learned labels capture semantic relationships between classes, and thereby improve teacher models for the downstream task of distillation.
arXiv Detail & Related papers (2020-09-20T18:42:13Z)
- Learning to Purify Noisy Labels via Meta Soft Label Corrector [49.92310583232323]
Recent deep neural networks (DNNs) can easily overfit to biased training data with noisy labels.
Label correction strategy is commonly used to alleviate this issue.
We propose a meta-learning model that estimates soft labels through a meta-gradient descent step.
arXiv Detail & Related papers (2020-08-03T03:25:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.