Hard Sample Aware Noise Robust Learning for Histopathology Image
Classification
- URL: http://arxiv.org/abs/2112.03694v1
- Date: Sun, 5 Dec 2021 11:07:55 GMT
- Title: Hard Sample Aware Noise Robust Learning for Histopathology Image
Classification
- Authors: Chuang Zhu, Wenkai Chen, Ting Peng, Ying Wang, Mulan Jin
- Abstract summary: We introduce a novel hard sample aware noise robust learning method for histopathology image classification.
To distinguish the informative hard samples from the harmful noisy ones, we build an easy/hard/noisy (EHN) detection model.
We propose a noise suppressing and hard enhancing (NSHE) scheme to train the noise robust model.
- Score: 4.75542005200538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based histopathology image classification is a key technique to
help physicians improve the accuracy and promptness of cancer diagnosis.
However, noisy labels are often inevitable in the complex manual annotation
process and thus mislead the training of the classification model. In this
work, we introduce a novel hard sample aware noise robust learning method for
histopathology image classification. To distinguish the informative hard
samples from the harmful noisy ones, we build an easy/hard/noisy (EHN)
detection model using each sample's training history. We then integrate the EHN
model into a self-training architecture to lower the noise rate through gradual
label correction. With the resulting nearly clean dataset, we further propose a
noise suppressing and hard enhancing (NSHE) scheme to train the noise robust
model. Compared with previous works, our method retains more clean samples
and can be applied directly to real-world noisy datasets without
using a clean subset. Experimental results demonstrate that the proposed scheme
outperforms the current state-of-the-art methods on both synthetic and
real-world noisy datasets. The source code and data are available at
https://github.com/bupt-ai-cz/HSA-NRL/.
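As a rough illustration of the EHN idea, the sketch below buckets samples by their prediction history across training epochs. The agreement statistic and the two thresholds are illustrative stand-ins for the paper's learned detection model, not the authors' exact procedure.

```python
import numpy as np

def split_easy_hard_noisy(history, easy_thresh=0.9, noisy_thresh=0.2):
    """Bucket samples by training history (illustrative stand-in for EHN).

    history: (num_samples, num_epochs) 0/1 array; 1 means the model's
    prediction agreed with the given label at that epoch.
    """
    agreement = history.mean(axis=1)   # fraction of epochs the label was fitted
    easy = agreement >= easy_thresh    # learned early and consistently
    noisy = agreement <= noisy_thresh  # almost never fitted: likely mislabeled
    hard = ~(easy | noisy)             # fluctuating: informative hard samples
    return easy, hard, noisy

# Toy usage: 4 samples tracked over 10 epochs.
hist = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # easy
    [0, 0, 0, 0, 0, 0, 0, 1, 0, 0],   # likely noisy
    [0, 1, 0, 1, 1, 0, 1, 1, 0, 1],   # hard
    [0, 0, 1, 1, 1, 1, 1, 1, 1, 1],   # learned late: treated as hard here
])
print(split_easy_hard_noisy(hist))
```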
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
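A minimal sketch of the affine-tuning idea from the NMTune entry above: learn a per-dimension scale and shift on top of frozen, noisily pre-trained features before a fresh linear head. The module and dimensions here are assumptions; NMTune itself adds further regularization objectives.

```python
import torch
import torch.nn as nn

class AffineTune(nn.Module):
    """Affine-transform frozen pre-trained features before a linear head."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(feat_dim))   # learnable per-dim scale
        self.shift = nn.Parameter(torch.zeros(feat_dim))  # learnable per-dim shift
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frozen_feats: torch.Tensor) -> torch.Tensor:
        z = frozen_feats * self.scale + self.shift  # reshape the feature space
        return self.head(z)

model = AffineTune(feat_dim=512, num_classes=10)
logits = model(torch.randn(4, 512))  # features from a frozen backbone
print(logits.shape)
```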
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
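A hedged sketch of the surrogate-filtering idea using the open-source CLIP package: a sample is flagged when CLIP's zero-shot prediction disagrees with its assigned label. The prompt template and class names are hypothetical, and the paper's actual selection criterion may differ.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["benign tissue", "malignant tissue"]  # hypothetical labels
text = clip.tokenize(
    [f"a histopathology image of {c}" for c in class_names]
).to(device)

def looks_mislabeled(image_path: str, given_label: int) -> bool:
    """Flag a sample when CLIP's zero-shot prediction disagrees with its label."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)
    return probs.argmax(dim=-1).item() != given_label
```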
- Instance-dependent Noisy-label Learning with Graphical Model Based Noise-rate Estimation [16.283722126438125]
Label Noise Learning (LNL) methods incorporate a sample selection stage to differentiate clean and noisy-label samples.
Such a curriculum is sub-optimal because it does not consider the actual label noise rate in the training set.
This paper addresses this issue with a new noise-rate estimation method that is easily integrated with most state-of-the-art (SOTA) LNL methods.
arXiv Detail & Related papers (2023-05-31T01:46:14Z)
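To see why the estimated noise rate matters, here is a minimal small-loss selection step that keeps the (1 - rate) fraction of lowest-loss samples; the graphical-model estimator itself is beyond this sketch, and the function name is illustrative.

```python
import torch

def select_clean(losses: torch.Tensor, est_noise_rate: float) -> torch.Tensor:
    """Keep the (1 - noise_rate) fraction of lowest-loss samples."""
    keep = max(1, int((1.0 - est_noise_rate) * losses.numel()))
    return torch.topk(-losses, keep).indices  # indices of the smallest losses

losses = torch.tensor([0.1, 2.3, 0.4, 1.9, 0.2, 0.3])
print(select_clean(losses, est_noise_rate=0.33))  # keeps the 4 cleanest samples
```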
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples seen in previous work.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
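The noise transition that LCCN parameterizes can be illustrated with a simple forward-correction loss, where a transition matrix T (assumed known here) maps the model's clean-label beliefs to a distribution over observed noisy labels. LCCN instead infers T and the latent true labels with Gibbs sampling.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy through a class-conditional noise transition matrix.

    T[i, j] = P(observed label j | true label i).
    """
    clean_probs = F.softmax(logits, dim=-1)  # model's belief over true labels
    noisy_probs = clean_probs @ T            # implied distribution over observed labels
    return F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_labels)

T = torch.tensor([[0.9, 0.1],
                  [0.2, 0.8]])               # assumed/estimated transition
logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
print(forward_corrected_loss(logits, labels, T))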
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ a self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
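A minimal sketch of the self-ensemble filter idea: a teacher kept as an exponential moving average (EMA) of the student scores each sample, and high-loss samples are treated as noisy. The momentum and loss threshold are assumptions; the paper pairs this filter with global and local representation learning across co-trained networks.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Self-ensemble: teacher weights track an EMA of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def filter_by_teacher(teacher, x, labels, loss_thresh=1.0):
    """Treat samples whose teacher loss exceeds a threshold as noisy."""
    with torch.no_grad():
        losses = torch.nn.functional.cross_entropy(
            teacher(x), labels, reduction="none")
    return losses <= loss_thresh  # mask of presumed-clean samples

student = torch.nn.Linear(8, 2)
teacher = torch.nn.Linear(8, 2)
teacher.load_state_dict(student.state_dict())
ema_update(teacher, student)
print(filter_by_teacher(teacher, torch.randn(5, 8), torch.randint(0, 2, (5,))))
```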
- Contrastive Learning Improves Model Robustness Under Label Noise [3.756550107432323]
We show that initializing supervised robust methods with representations learned through contrastive learning leads to significantly improved performance under label noise.
Even the simplest of these methods can outperform the state-of-the-art SSL method by more than 50% under high label noise when combined with contrastive learning.
arXiv Detail & Related papers (2021-04-19T00:27:58Z)
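A sketch of that initialization recipe: load contrastively pre-trained backbone weights before running any supervised noise-robust method. The checkpoint name is hypothetical.

```python
import os
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Start robust training from contrastively pre-trained weights rather than
# a random init. "simclr_pretrained.pt" is a hypothetical checkpoint name.
encoder = resnet18()
encoder.fc = nn.Identity()  # expose 512-d features instead of class logits
if os.path.exists("simclr_pretrained.pt"):
    state = torch.load("simclr_pretrained.pt", map_location="cpu")
    encoder.load_state_dict(state, strict=False)  # keep backbone weights only

classifier = nn.Sequential(encoder, nn.Linear(512, 10))
# ...then train `classifier` with any noise-robust supervised objective.
```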
- Deep k-NN for Noisy Labels [55.97221021252733]
We show that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled data and produce more accurate models than many recently proposed methods.
arXiv Detail & Related papers (2020-04-26T05:15:36Z)
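A minimal version of that filtering step with scikit-learn: fit a k-NN classifier on the preliminary model's logits and flag samples whose neighborhood majority vote disagrees with the given label. Counting each point among its own neighbors is a simplification of the paper's procedure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_label_filter(logits: np.ndarray, labels: np.ndarray, k: int = 10):
    """Flag samples whose k nearest neighbors in logit space mostly
    disagree with their given label (suspected mislabeled)."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(logits, labels)
    neighbor_vote = knn.predict(logits)  # majority label among neighbors
    return neighbor_vote != labels       # True = suspected mislabeled

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))      # stand-in for a model's logit layer
labels = rng.integers(0, 5, size=100)
print(knn_label_filter(logits, labels).sum(), "suspected mislabeled samples")
```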
- Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant Disease Diagnosis [64.82680813427054]
Plant diseases are one of the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates a rectified meta-learning module into the common CNN paradigm to train a noise-robust deep network without using extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
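The meta-learning module itself is involved; as a crude stand-in, the sketch below shows the effect it aims for, down-weighting likely-mislabeled samples in the loss. Here the weight is the model's own confidence in the given label rather than a learned meta-weight, which is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, labels):
    """Down-weight likely-noisy samples by the model's own confidence
    in the given label (stand-in for learned meta-weights)."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    with torch.no_grad():
        conf = F.softmax(logits, dim=-1).gather(1, labels[:, None]).squeeze(1)
    return (conf * per_sample).mean()

loss = confidence_weighted_loss(torch.randn(4, 3), torch.tensor([0, 2, 1, 1]))
print(loss)
```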
This list is automatically generated from the titles and abstracts of the papers on this site.