MetaLabelNet: Learning to Generate Soft-Labels from Noisy-Labels
- URL: http://arxiv.org/abs/2103.10869v1
- Date: Fri, 19 Mar 2021 15:47:44 GMT
- Title: MetaLabelNet: Learning to Generate Soft-Labels from Noisy-Labels
- Authors: Görkem Algan, Ilkay Ulusoy
- Abstract summary: Real-world datasets commonly have noisy labels, which negatively affect the performance of deep neural networks (DNNs).
We propose a label noise robust learning algorithm, in which the base classifier is trained on soft-labels that are produced according to a meta-objective.
Our algorithm uses a small amount of clean data as meta-data, which can be obtained effortlessly in many cases.
- Score: 0.20305676256390928
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Real-world datasets commonly have noisy labels, which negatively affect the
performance of deep neural networks (DNNs). To address this problem, we propose a
label-noise-robust learning algorithm in which the base classifier is trained on
soft-labels that are produced according to a meta-objective. In each iteration, before
conventional training, the meta-objective reshapes the loss function by changing the
soft-labels, so that the resulting gradient updates lead to model parameters with
minimum loss on the meta-data. Soft-labels are generated from extracted features of
data instances, and the mapping function is learned by a single-layer perceptron (SLP)
network, which we call MetaLabelNet. The base classifier is then trained on these
generated soft-labels. These iterations are repeated for each batch of training data.
Our algorithm uses a small amount of clean data as meta-data, which can be obtained
effortlessly in many cases. We perform extensive experiments on benchmark datasets
with both synthetic and real-world noise. Results show that our approach outperforms
existing baselines.
Related papers
- Group Benefits Instances Selection for Data Purification [21.977432359384835]
Existing methods for combating label noise are typically designed and tested on synthetic datasets.
We propose a method named GRIP to alleviate the noisy label problem for both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-23T03:06:19Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across a broad range of noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning [113.8799653759137]
We introduce a novel label noise type called BadLabel, which can degrade the performance of existing label-noise learning (LNL) algorithms by a large margin.
BadLabel is crafted based on the label-flipping attack against standard classification.
We propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
arXiv Detail & Related papers (2023-05-28T06:26:23Z)
- Learning advisor networks for noisy image classification [22.77447144331876]
We introduce the novel concept of an advisor network to address the problem of noisy labels in image classification.
We train it with a meta-learning strategy so that it can adapt throughout the training of the main model.
We test our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M, which contains real-world noise, reporting state-of-the-art results.
arXiv Detail & Related papers (2022-11-08T11:44:08Z)
- Scalable Penalized Regression for Noise Detection in Learning with Noisy Labels [44.79124350922491]
We propose using a theoretically guaranteed noisy label detection framework to detect and remove noisy data for Learning with Noisy Labels (LNL).
Specifically, we design a penalized regression to model the linear relation between network features and one-hot labels; a toy sketch of this idea appears after this list.
To make the framework scale to datasets with many categories and training samples, we propose a split algorithm that divides the whole training set into small pieces.
arXiv Detail & Related papers (2022-03-15T11:09:58Z)
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
- Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise degrades the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z)
- Noisy Labels Can Induce Good Representations [53.47668632785373]
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
arXiv Detail & Related papers (2020-12-23T18:58:05Z)
- Meta Soft Label Generation for Noisy Labels [0.0]
We propose a Meta Soft Label Generation algorithm called MSLG.
MSLG jointly generates soft labels using meta-learning techniques and learns DNN parameters in an end-to-end fashion.
Our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-07-11T19:37:44Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) achieve great success on many tasks with the help of large-scale, well-annotated datasets.
However, labeling large-scale data is costly and error-prone, making it difficult to guarantee annotation quality.
We propose Temporal Calibrated Regularization (TCR), which uses the original labels together with the model's predictions from the previous epoch; a minimal sketch of this combination appears after this list.
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
- Label Noise Types and Their Effects on Deep Learning [0.0]
In this work, we provide a detailed analysis of the effects of different kinds of label noise on learning.
We propose a generic framework to generate feature-dependent label noise, which we show to be the most challenging case for learning.
To make it easier for other researchers to test their algorithms with noisy labels, we share corrupted labels for the most commonly used benchmark datasets.
arXiv Detail & Related papers (2020-03-23T18:03:39Z)
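The Scalable Penalized Regression entry above models one-hot labels as a linear function of network features plus a per-sample correction term, and flags samples whose correction is pushed away from zero as noisy. The toy sketch below implements that idea with an alternating ridge / group-soft-thresholding solver; the solver, the penalty weight, and the random data are illustrative assumptions, not the paper's algorithm.

```python
# Toy sketch of penalized-regression noise detection: Y ~ X @ beta + gamma, where
# gamma is a per-sample correction penalized row-wise; non-zero rows mark noisy
# samples. Solver and lambda are assumptions for illustration only.
import torch

def detect_noisy(features, one_hot_labels, lam=0.5, ridge=1e-2, iters=50):
    X, Y = features, one_hot_labels          # X: (n, d), Y: (n, c)
    n, d = X.shape
    gamma = torch.zeros_like(Y)              # per-sample correction matrix
    eye = torch.eye(d)
    for _ in range(iters):
        # Closed-form ridge update for the regression weights.
        beta = torch.linalg.solve(X.T @ X + ridge * eye, X.T @ (Y - gamma))
        # Row-wise soft-thresholding (group-lasso proximal step) on residuals.
        R = Y - X @ beta
        norms = R.norm(dim=1, keepdim=True).clamp_min(1e-12)
        gamma = R * (1 - lam / norms).clamp_min(0.0)
    # Samples with a non-zero correction row are treated as noisy.
    return gamma.norm(dim=1) > 1e-6

X = torch.randn(100, 16)
Y = torch.nn.functional.one_hot(torch.randint(0, 4, (100,)), 4).float()
print(detect_noisy(X, Y).sum().item(), "samples flagged as noisy")
```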
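Similarly, the Temporal Calibrated Regularization entry combines the original labels with the model's predictions from the previous epoch. A minimal sketch of that combination follows; the convex mixing rule and the weight `beta` are assumptions, not the paper's exact formulation.

```python
# Sketch of a TCR-style target: mix the (possibly noisy) original labels with the
# previous epoch's predictions. The mixing rule is illustrative only.
import torch
import torch.nn.functional as F

def tcr_style_targets(one_hot_labels, prev_epoch_probs, beta=0.7):
    """Convex combination of original labels and previous-epoch predictions."""
    return beta * one_hot_labels + (1.0 - beta) * prev_epoch_probs

labels = F.one_hot(torch.tensor([2, 0]), num_classes=3).float()
prev = torch.tensor([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])
# Train with soft-target cross-entropy against these targets instead of raw labels.
print(tcr_style_targets(labels, prev))
```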