Learning from Noisy Labels with Decoupled Meta Label Purifier
- URL: http://arxiv.org/abs/2302.06810v2
- Date: Wed, 15 Feb 2023 05:09:06 GMT
- Title: Learning from Noisy Labels with Decoupled Meta Label Purifier
- Authors: Yuanpeng Tu, Boshen Zhang, Yuxi Li, Liang Liu, Jian Li, Jiangning
Zhang, Yabiao Wang, Chengjie Wang, Cai Rong Zhao
- Abstract summary: Training deep neural networks with noisy labels is challenging since DNNs can easily memorize inaccurate labels.
In this paper, we propose a novel multi-stage label purifier named DMLP.
DMLP decouples the label correction process into label-free representation learning and a simple meta label purifier.
- Score: 33.87292143223425
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Training deep neural networks (DNNs) with noisy labels is challenging,
since DNNs can easily memorize inaccurate labels, leading to poor generalization.
Recently, meta-learning based label correction strategies have been widely adopted
to tackle this problem by identifying and correcting potential noisy labels with
the help of a small set of clean validation data. Although training with purified
labels can effectively improve performance, solving the meta-learning problem
inevitably involves a nested loop of bi-level optimization between model weights
and hyper-parameters (i.e., the label distribution). As a compromise, previous
methods resort to a coupled learning process with alternating updates. In this
paper, we empirically find that such simultaneous optimization over both model
weights and label distribution cannot reach an optimal routine, which limits the
representation ability of the backbone and the accuracy of the corrected labels.
Motivated by this observation, we propose a novel multi-stage label purifier
named DMLP. DMLP decouples the label correction process into label-free
representation learning and a simple meta label purifier, so that it can focus on
extracting discriminative features and on correcting labels in two distinct
stages. DMLP is a plug-and-play label purifier: the purified labels can be
directly reused in naive end-to-end network retraining or in other robust
learning methods, where state-of-the-art results are obtained on several
synthetic and real-world noisy datasets, especially under high noise levels.
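To make the decoupling concrete, here is a minimal sketch (hypothetical, not the authors' released code) of the second stage in PyTorch: stage one is assumed to have produced frozen features from a label-free (e.g., self-supervised) encoder, and the purifier then treats the soft labels as hyper-parameters, refining them through a bi-level loop over a lightweight linear classifier and a small clean validation set. All names, shapes, and hyper-parameters below are illustrative.

```python
import torch
import torch.nn.functional as F

def meta_purify(feats, noisy_onehot, val_feats, val_labels,
                steps=200, inner_lr=0.5, outer_lr=10.0):
    """Stage-two sketch: with stage-one features frozen, refine soft labels
    (treated as hyper-parameters) so that a linear classifier trained on
    them performs well on a small clean validation set."""
    soft = noisy_onehot.clone().float().requires_grad_(True)  # label distribution
    w = torch.zeros(feats.size(1), noisy_onehot.size(1), requires_grad=True)
    for _ in range(steps):
        # Inner step: one differentiable gradient update of the linear
        # classifier on the current soft labels (full-batch for brevity).
        inner_loss = F.cross_entropy(feats @ w, soft.softmax(dim=1))
        (g_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_new = w - inner_lr * g_w
        # Outer step: validation loss of the updated classifier,
        # back-propagated through the inner step into the soft labels.
        outer_loss = F.cross_entropy(val_feats @ w_new, val_labels)
        (g_soft,) = torch.autograd.grad(outer_loss, soft)
        with torch.no_grad():
            soft -= outer_lr * g_soft
        w = w_new.detach().requires_grad_(True)  # carry the classifier forward
    return soft.softmax(dim=1).detach()          # purified soft labels
```

Because the features are fixed and the inner model is linear, each meta step is cheap; this is what can keep the purifier "simple" compared with running the bi-level loop through a full backbone, as coupled methods must.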
Related papers
- Online Multi-Label Classification under Noisy and Changing Label Distribution [9.17381554071824]
We propose an online multi-label classification algorithm under Noisy and Changing Label Distribution (NCLD).
The objective is to simultaneously model label scoring and label ranking for high accuracy; its robustness to NCLD stems from three novel components.
arXiv Detail & Related papers (2024-10-03T11:16:43Z) - Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z) - Label-Retrieval-Augmented Diffusion Models for Learning from Noisy
Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z) - All Points Matter: Entropy-Regularized Distribution Alignment for
Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
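A rough sketch of what such a strategy can look like in PyTorch follows; the exact objective, the sign and weight of the entropy term, and all names here are assumptions for illustration, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def alignment_loss(logits, pseudo_labels, lam=0.1):
    """Pull per-point predictions toward the pseudo-label distribution
    (probability rows), with an entropy term regularizing predictions
    on all points, labeled or not."""
    log_p = F.log_softmax(logits, dim=1)
    p = log_p.exp()
    align = F.kl_div(log_p, pseudo_labels, reduction="batchmean")  # prediction/pseudo-label gap
    entropy = -(p * log_p).sum(dim=1).mean()                       # prediction entropy
    return align + lam * entropy
```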
arXiv Detail & Related papers (2023-05-25T08:19:31Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this view, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
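The distribution-consistency idea can be sketched in a few lines; the binary setup, the assumed known class prior, and the cross-entropy form below are illustrative guesses rather than the paper's exact loss:

```python
import torch

def dist_consistency_loss(unlabeled_logits, prior=0.4):
    """Match the predicted label distribution over unlabeled data to the
    (assumed known) positive-class prior in PU learning."""
    p_pos = torch.sigmoid(unlabeled_logits).mean()   # predicted positive proportion
    pred = torch.stack([p_pos, 1.0 - p_pos])
    target = torch.tensor([prior, 1.0 - prior])
    # Cross-entropy between ground-truth and predicted label distributions.
    return -(target * torch.log(pred + 1e-8)).sum()
```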
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning
with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
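One way to picture refurbishment with confidence estimation is the blend below; using the maximum softmax probability as the confidence estimate and the 0.9 threshold are simplifying assumptions, not Robust LR's actual estimator:

```python
import torch
import torch.nn.functional as F

def refurbish(logits, noisy_onehot, threshold=0.9):
    """Blend each (float one-hot) noisy label with the model's prediction,
    weighted by an estimated confidence; refurbish only confident samples."""
    probs = F.softmax(logits, dim=1)
    conf, _ = probs.max(dim=1, keepdim=True)             # per-sample confidence
    blended = conf * probs + (1.0 - conf) * noisy_onehot
    keep = conf.squeeze(1) >= threshold                  # who gets refurbished
    return torch.where(keep.unsqueeze(1), blended, noisy_onehot)
```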
arXiv Detail & Related papers (2021-12-06T12:10:17Z) - An Ensemble Noise-Robust K-fold Cross-Validation Selection Method for
Noisy Labels [0.9699640804685629]
Large-scale datasets tend to contain mislabeled samples that can be memorized by deep neural networks (DNNs).
We present Ensemble Noise-robust K-fold Cross-Validation Selection (E-NKCVS) to effectively select clean samples from noisy data.
We evaluate our approach on various image and text classification tasks where the labels have been manually corrupted with different noise ratios.
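The selection step can be approximated with a short sketch; logistic regression stands in for the paper's DNNs, and the simple agreement vote below omits E-NKCVS's ensemble and soft-label weighting:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def select_clean(X, y, n_splits=5, n_repeats=3, agree=2):
    """Keep a sample as 'clean' when held-out predictions match its given
    label in at least `agree` of `n_repeats` repeated K-fold runs."""
    votes = np.zeros(len(y), dtype=int)
    for seed in range(n_repeats):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for train_idx, test_idx in kf.split(X):
            clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            votes[test_idx] += clf.predict(X[test_idx]) == y[test_idx]
    return votes >= agree  # boolean mask over the dataset
```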
arXiv Detail & Related papers (2021-07-06T02:14:52Z) - Semi-supervised Relation Extraction via Incremental Meta Self-Training [56.633441255756075]
Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples.
Existing self-training methods suffer from the gradual drift problem, where noisy pseudo labels on unlabeled data are incorporated during training.
We propose a method called MetaSRE, where a Relation Label Generation Network assesses the quality of pseudo labels by (meta-)learning from the successful and failed attempts of a Relation Classification Network as an additional meta-objective.
arXiv Detail & Related papers (2020-10-06T03:54:11Z) - Learning to Purify Noisy Labels via Meta Soft Label Corrector [49.92310583232323]
Recent deep neural networks (DNNs) can easily overfit to biased training data with noisy labels.
Label correction strategies are commonly used to alleviate this issue.
We propose a meta-learning model that estimates soft labels through a meta-gradient descent step.
arXiv Detail & Related papers (2020-08-03T03:25:17Z) - Meta Soft Label Generation for Noisy Labels [0.0]
We propose a Meta Soft Label Generation algorithm called MSLG.
MSLG can jointly generate soft labels using meta-learning techniques.
Our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-07-11T19:37:44Z)