Alleviating Noisy-label Effects in Image Classification via Probability
Transition Matrix
- URL: http://arxiv.org/abs/2110.08866v2
- Date: Tue, 19 Oct 2021 16:56:14 GMT
- Title: Alleviating Noisy-label Effects in Image Classification via Probability
Transition Matrix
- Authors: Ziqi Zhang, Yuexiang Li, Hongxin Wei, Kai Ma, Tao Xu, Yefeng Zheng
- Abstract summary: Deep-learning-based image classification frameworks often suffer from the noisy label problem caused by inter-observer variation.
We propose a plugin module, namely noise ignoring block (NIB), to separate the hard samples from the mislabeled ones.
Our NIB module consistently improves the performance of state-of-the-art robust training methods.
- Score: 30.532481130511137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning-based image classification frameworks often suffer from the
noisy label problem caused by inter-observer variation. Recent studies
employed learning-to-learn paradigms (e.g., Co-teaching and JoCoR) to filter
the samples with noisy labels from the training set. However, most of them use
a simple cross-entropy loss as the criterion for noisy label identification.
The hard samples, which are beneficial for classifier learning, are often
mistakenly treated as noise in such a setting, since both the hard samples and
ones with noisy labels lead to a relatively larger loss value than the easy
cases. In this paper, we propose a plugin module, namely noise ignoring block
(NIB), consisting of a probability transition matrix and an inter-class
correlation (IC) loss, to separate the hard samples from the mislabeled ones,
and further boost the accuracy of image classification networks trained with
noisy labels. Concretely, our IC loss is calculated as the Kullback-Leibler
divergence between the network prediction and the accumulative soft label
generated by the probability transition matrix. As a result, hard cases yield
lower IC loss values and can be easily distinguished from mislabeled ones.
Extensive experiments are conducted on natural and medical image datasets
(CIFAR-10 and ISIC 2019). The experimental results show that our NIB module
consistently improves the performance of state-of-the-art robust training
methods.
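The abstract specifies the IC loss only at a high level (a KL divergence between the prediction and an accumulative soft label produced by the probability transition matrix). The snippet below is a minimal, hypothetical PyTorch sketch of such a per-sample criterion; the function name, the way the transition matrix is accumulated, and the exact direction of the KL divergence are assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def ic_loss(logits, noisy_labels, transition_matrix, eps=1e-8):
    """Hypothetical per-sample inter-class correlation (IC) loss: KL divergence
    between the network prediction and an accumulative soft label obtained by
    pushing the (possibly noisy) one-hot label through a row-stochastic
    probability transition matrix accumulated over training."""
    pred = F.softmax(logits, dim=1)                                    # (B, C) predicted distribution
    one_hot = F.one_hot(noisy_labels, num_classes=logits.size(1)).float()
    soft_labels = one_hot @ transition_matrix                          # (B, C) accumulative soft labels
    soft_labels = soft_labels / (soft_labels.sum(dim=1, keepdim=True) + eps)
    # Per-sample KL(soft label || prediction); the abstract suggests hard-but-clean
    # samples receive lower values than mislabeled ones.
    kl = (soft_labels * (torch.log(soft_labels + eps) - torch.log(pred + eps))).sum(dim=1)
    return kl
```

In a Co-teaching-style pipeline, such per-sample scores could complement the cross-entropy criterion so that hard-but-correctly-labeled samples (low IC loss) are not discarded together with mislabeled ones.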
Related papers
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
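The summary only states that CLIP serves as a surrogate for filtering; one plausible (hypothetical) realization is to keep samples whose zero-shot CLIP prediction agrees with the assigned label. The prompt template, threshold behaviour, and use of the OpenAI `clip` package below are assumptions, not the cited method, and images are assumed to be already CLIP-preprocessed tensors.

```python
import torch
import clip  # OpenAI CLIP package; any vision-language model with image/text encoders would do

def clip_agreement_mask(images, noisy_labels, class_names, device="cuda", threshold=None):
    """Hypothetical CLIP-based filter: keep samples whose zero-shot CLIP
    prediction agrees with the assigned (possibly noisy) label."""
    model, _ = clip.load("ViT-B/32", device=device)
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        img_feat = torch.nn.functional.normalize(model.encode_image(images.to(device)).float(), dim=1)
        txt_feat = torch.nn.functional.normalize(model.encode_text(prompts).float(), dim=1)
        probs = (100.0 * img_feat @ txt_feat.t()).softmax(dim=1)       # (B, num_classes)
    if threshold is None:
        return probs.argmax(dim=1).cpu() == noisy_labels               # boolean keep-mask
    # Alternatively, keep samples whose assigned-label probability exceeds a threshold.
    return probs.gather(1, noisy_labels.to(device).view(-1, 1)).squeeze(1).cpu() > threshold
```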
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm [73.94839250910977]
Noisy multi-label learning has garnered increasing attention due to the challenges posed by collecting large-scale accurate labels.
The introduction of transition matrices can help model multi-label noise and enable the development of statistically consistent algorithms.
We propose a novel estimator that leverages label correlations without the need for anchor points or precise fitting of noisy class posteriors.
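The entry does not spell out how a transition matrix yields a statistically consistent algorithm. As a generic illustration only (standard forward loss correction, not the correlation-aware estimator proposed in this paper), the predicted clean-class posterior can be pushed through a row-stochastic matrix T, where T[i, j] approximates P(noisy label = j | clean label = i), before applying the negative log-likelihood:

```python
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_labels, T, eps=1e-8):
    """Standard forward loss correction with a class transition matrix T
    (illustrative only; the cited paper proposes a different estimator that
    exploits label correlations in the multi-label setting)."""
    clean_posterior = F.softmax(logits, dim=1)   # model's estimate of the clean label
    noisy_posterior = clean_posterior @ T        # implied distribution over observed noisy labels
    return F.nll_loss(torch.log(noisy_posterior + eps), noisy_labels)
```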
arXiv Detail & Related papers (2023-09-22T08:35:38Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) attracts increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling [22.62790706276081]
Training deep neural networks (DNNs) with noisy labels is practically challenging.
Previous efforts tend to handle part or full data in a unified denoising flow.
We propose a coarse-to-fine robust learning method called CREMA to handle noisy data in a divide-and-conquer manner.
arXiv Detail & Related papers (2022-08-23T02:06:38Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR-10/CIFAR-100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
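The selection rule is described only verbally. A minimal sketch of the underlying idea, assuming per-sample features and noisy labels are available, is to score each sample by how often its annotated label matches the labels of its nearest neighbours in feature space (the exact rule used by S3 may differ):

```python
import torch

def neighborhood_agreement(features, noisy_labels, k=10):
    """Score each sample by the fraction of its k nearest neighbours (cosine
    similarity in feature space) that share its annotated label. Samples with
    low agreement are candidates for being mislabeled. This is a simplified
    illustration of the idea, not the exact S3 selection rule."""
    feats = torch.nn.functional.normalize(features, dim=1)
    sim = feats @ feats.t()                           # (N, N) cosine similarities
    sim.fill_diagonal_(-float("inf"))                 # exclude the sample itself
    knn_idx = sim.topk(k, dim=1).indices              # (N, k) nearest-neighbour indices
    neighbor_labels = noisy_labels[knn_idx]           # (N, k) labels of the neighbours
    agreement = (neighbor_labels == noisy_labels.unsqueeze(1)).float().mean(dim=1)
    return agreement                                  # keep samples with high agreement
```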
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Mitigating Memorization in Sample Selection for Learning with Noisy Labels [4.679610943608667]
We propose a criteria to penalize dominant-noisy-labeled samples intensively through class-wise penalty labels.
Using the proposed sample selection, the learning process of the network becomes significantly more robust to noisy labels.
arXiv Detail & Related papers (2021-07-08T06:44:04Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Multi-Objective Interpolation Training for Robustness to Label Noise [17.264550056296915]
We show that standard supervised contrastive learning degrades in the presence of label noise.
We propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning.
Experiments on synthetic and real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves state-of-the-art results.
arXiv Detail & Related papers (2020-12-08T15:01:54Z)
- Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed as CRSSC, for coping with label noise in training deep FG models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
arXiv Detail & Related papers (2020-08-06T02:07:21Z)