From Noisy Prediction to True Label: Noisy Prediction Calibration via
Generative Model
- URL: http://arxiv.org/abs/2205.00690v1
- Date: Mon, 2 May 2022 07:15:45 GMT
- Title: From Noisy Prediction to True Label: Noisy Prediction Calibration via
Generative Model
- Authors: HeeSun Bae, Seungjae Shin, JoonHo Jang, Byeonghu Na, Kyungwoo Song and
Il-Chul Moon
- Abstract summary: Noisy Prediction Calibration (NPC) is a new approach to learning with noisy labels.
NPC corrects the noisy prediction from the pre-trained classifier to the true label as a post-processing scheme.
Our method boosts the classification performances of all baseline models on both synthetic and real-world datasets.
- Score: 22.722830935155223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Noisy labels are inevitable yet problematic in the machine learning community.
They ruin the generalization power of a classifier by causing it to overfit to wrong
labels. Existing methods for noisy labels have focused on modifying the classifier
training procedure, which leads to two possible problems. First, these methods are not
applicable to a pre-trained classifier without further access to its training. Second, it
is not easy to train a classifier and simultaneously remove all negative effects of noisy
labels. Motivated by these problems, we suggest a new branch of approach, Noisy
Prediction Calibration (NPC), for learning with noisy labels. Through the introduction
and estimation of a new type of transition matrix via a generative model, NPC corrects
the noisy prediction from a pre-trained classifier to the true label as a post-processing
scheme. We prove that NPC theoretically aligns with transition-matrix-based methods, yet
it provides a more accurate pathway to estimating the true label, even without
involvement in classifier learning. Moreover, NPC is applicable to any classifier trained
with noisy-label methods, as long as training instances and their predictions are
available. Our method, NPC, boosts the classification performance of all baseline models
on both synthetic and real-world datasets.
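The abstract gives no reference implementation, but the post-processing idea can be illustrated with a small sketch. Assuming an already-estimated transition matrix (in NPC this comes from the generative model; here it is simply given) and a class prior, Bayes' rule maps the classifier's noisy prediction back to a posterior over the true label. All names and values below are illustrative, not from the paper:

```python
import numpy as np

def calibrate_prediction(noisy_probs, transition, prior):
    """Post-processing correction of a pre-trained classifier's output.

    noisy_probs: (C,) classifier output p(yhat | x) over C classes.
    transition:  (C, C) estimate of p(yhat = j | y = i); in NPC this is
                 produced by a generative model, here it is assumed given.
    prior:       (C,) prior over the true label p(y).
    """
    # Bayes per predicted class: p(y = i | yhat = j) = T[i, j] p(y = i) / Z_j
    joint = transition * prior[:, None]                   # p(yhat = j, y = i)
    posterior = joint / joint.sum(axis=0, keepdims=True)  # p(y = i | yhat = j)
    # Marginalise over the classifier's soft prediction p(yhat = j | x).
    return posterior @ noisy_probs

# Toy example: class 0 is frequently flipped to class 1 at prediction time.
T = np.array([[0.7, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
p_noisy = np.array([0.2, 0.7, 0.1])   # classifier leans toward class 1
print(calibrate_prediction(p_noisy, T, np.full(3, 1 / 3)))
# -> probability mass shifts back toward class 0
```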
Related papers
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy
Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- Class Prototype-based Cleaner for Label Noise Learning [73.007001454085]
Semi-supervised learning methods are current SOTA solutions to the noisy-label learning problem.
We propose a simple yet effective solution, named Class Prototype-based label noise Cleaner.
arXiv Detail & Related papers (2022-12-21T04:56:41Z)
- SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels [4.876988315151037]
Deep neural networks are prone to overfitting noisy labels, resulting in poor generalization performance.
We present a method, self-ensemble label correction (SELC), to progressively correct noisy labels and refine the model.
SELC obtains more promising and stable results in the presence of class-conditional, instance-dependent, and real-world label noise.
arXiv Detail & Related papers (2022-05-02T18:42:47Z)
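As the summary describes it, SELC's core ingredient is a self-ensemble of the model's own predictions that progressively replaces the noisy targets. A minimal sketch of that idea (the momentum value is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def update_targets(targets, probs, momentum=0.9):
    """Blend the current soft targets with this epoch's predictions.

    targets: (N, C) soft labels, initialised from the noisy one-hot labels.
    probs:   (N, C) softmax predictions of the current model.
    """
    new_targets = momentum * targets + (1.0 - momentum) * probs
    return new_targets / new_targets.sum(axis=1, keepdims=True)
```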
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
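The summary notes that Prototypical adds no parameters beyond the embedding network; a nearest-class-mean rule is one minimal instantiation of that idea (function names are illustrative, not from the paper):

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    """One prototype per class: the mean embedding of its training points."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def predict(query, prototypes):
    """Assign the class whose prototype is nearest in embedding space."""
    return int(np.linalg.norm(prototypes - query, axis=1).argmin())
```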
- Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels [86.5943044285146]
The label noise transition matrix $T$ reflects the probabilities that true labels flip into noisy ones.
In this paper, we focus on learning under the mixed closed-set and open-set label noise.
Our method can better model the mixed label noise, as evidenced by its more robust performance compared with prior state-of-the-art label-noise learning methods.
arXiv Detail & Related papers (2020-12-02T02:42:45Z)
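For the closed-set part of this setting, the standard relation tying the clean and noisy class posteriors together through $T$ (a known identity in transition-matrix methods, not specific to this paper) is

$$ p(\tilde{y} = j \mid x) = \sum_{i=1}^{C} T_{ij}\, p(y = i \mid x), \qquad T_{ij} = p(\tilde{y} = j \mid y = i). $$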
- Error-Bounded Correction of Noisy Labels [17.510654621245656]
We show that the prediction of a noisy classifier can indeed be a good indicator of whether the label of a training example is clean.
Based on the theoretical result, we propose a novel algorithm that corrects the labels based on the noisy classifier prediction.
We incorporate our label correction algorithm into the training of deep neural networks and train models that achieve superior testing performance on multiple public datasets.
arXiv Detail & Related papers (2020-11-19T19:23:23Z)
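A minimal sketch of such a correction step, with a fixed confidence threshold standing in for the paper's theoretically derived criterion (the threshold value is an assumption):

```python
import numpy as np

def correct_labels(probs, labels, threshold=0.9):
    """Replace a training label with the classifier's prediction when
    the classifier is confident enough; the fixed threshold is an
    illustrative stand-in for the paper's error-bounded criterion."""
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    corrected = labels.copy()
    corrected[confident] = preds[confident]
    return corrected
```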
- Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels [98.13491369929798]
We propose a framework called Class2Simi, which transforms data points with noisy class labels to data pairs with noisy similarity labels.
Class2Simi is computationally efficient: the transformation is performed on the fly within mini-batches, and it only changes the loss on top of the model's predictions into a pairwise form.
arXiv Detail & Related papers (2020-06-14T07:55:32Z)
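The transformation itself is mechanical and can be sketched in a few lines: two examples in a mini-batch get similarity label 1 when their (possibly noisy) class labels agree, and 0 otherwise:

```python
import numpy as np

def class_to_similarity(labels):
    """On-the-fly, per-batch transformation from (noisy) class labels
    to pairwise (noisy) similarity labels."""
    return (labels[:, None] == labels[None, :]).astype(np.int64)

print(class_to_similarity(np.array([0, 2, 0, 1])))
# Row i, column j is 1 iff examples i and j share a class label.
```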
- Multi-Class Classification from Noisy-Similarity-Labeled Data [98.13491369929798]
We propose a method for learning from only noisy-similarity-labeled data.
We use a noise transition matrix to bridge the class-posterior probability between clean and noisy data.
We build a novel learning system that can assign noise-free class labels to instances.
arXiv Detail & Related papers (2020-02-16T05:10:21Z)