Co-Correcting: Noise-tolerant Medical Image Classification via Mutual Label Correction
- URL: http://arxiv.org/abs/2109.05159v1
- Date: Sat, 11 Sep 2021 02:09:52 GMT
- Title: Co-Correcting: Noise-tolerant Medical Image Classification via Mutual Label Correction
- Authors: Jiarun Liu, Ruirui Li, Chuan Sun
- Abstract summary: This paper proposes a noise-tolerant medical image classification framework named Co-Correcting.
It significantly improves classification accuracy and obtains more accurate labels through dual-network mutual learning, label probability estimation, and curriculum label correcting.
Experiments show that Co-Correcting achieves the best accuracy and generalization under different noise ratios in various tasks.
- Score: 5.994566233473544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of deep learning, medical image classification has improved significantly. However, deep learning requires massive amounts of labeled data. Labeling samples by human experts is expensive and time-consuming, while labels collected through crowd-sourcing suffer from noise that can degrade classifier accuracy. Approaches that can effectively handle label noise are therefore highly desirable. Unfortunately, recent progress on handling label noise in deep learning has gone largely unnoticed in medical image analysis. To fill this gap, this paper proposes a noise-tolerant medical image classification framework named Co-Correcting, which significantly improves classification accuracy and obtains more accurate labels through dual-network mutual learning, label probability estimation, and curriculum label correcting. On two representative medical image datasets and the MNIST dataset, we test six recent Learning-with-Noisy-Labels methods and conduct comparative studies. The experiments show that Co-Correcting achieves the best accuracy and generalization under different noise ratios in various tasks. Our project can be found at: https://github.com/JiarunLiu/Co-Correcting.
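As a rough illustration of the framework's two pillars, the sketch below pairs two networks that exchange small-loss samples (mutual learning) and replaces labels on which both networks agree. The function name, the keep_ratio parameter, and the consensus-based correction rule are illustrative assumptions, not the paper's exact algorithm; the actual method estimates label probabilities and phases correction in with a curriculum, as implemented in the repository above.

```python
# Minimal sketch of dual-network mutual learning with label correction,
# in the spirit of Co-Correcting. Hyperparameters and the correction
# rule are illustrative assumptions.
import torch
import torch.nn.functional as F

def co_correcting_step(net_a, net_b, opt_a, opt_b, x, y_noisy,
                       keep_ratio=0.8, correct=False):
    """One step: each network selects small-loss samples for its peer
    (mutual learning); optionally replace labels the two networks agree
    on (a stand-in for curriculum label correction)."""
    logits_a, logits_b = net_a(x), net_b(x)
    loss_a = F.cross_entropy(logits_a, y_noisy, reduction="none")
    loss_b = F.cross_entropy(logits_b, y_noisy, reduction="none")

    k = max(1, int(keep_ratio * len(y_noisy)))
    idx_a = torch.topk(-loss_a, k).indices  # small-loss picks of net A
    idx_b = torch.topk(-loss_b, k).indices  # small-loss picks of net B

    y = y_noisy.clone()
    if correct:
        pred_a, pred_b = logits_a.argmax(1), logits_b.argmax(1)
        agree = pred_a == pred_b        # both nets predict the same class
        y[agree] = pred_a[agree]        # trust the consensus label

    # Cross-update: A learns from B's selection and vice versa.
    opt_a.zero_grad()
    F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()
    opt_b.zero_grad()
    F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()
```

Trusting only labels that both peers agree on limits the error accumulation a single self-correcting network would suffer; in the paper, correction is additionally delayed and ramped up by a curriculum rather than enabled from the first epoch.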
Related papers
- Self-Relaxed Joint Training: Sample Selection for Severity Estimation with Ordinal Noisy Labels [5.892066196730197]
We propose a new framework for training with "ordinal" noisy labels.
Our framework uses two techniques: clean sample selection and dual-network architecture.
By appropriately using the soft and hard labels in the two techniques, we achieve more accurate sample selection and robust network training.
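A minimal sketch of how soft and hard labels might be combined with small-loss selection in a dual-network setup follows; the selection rule and the soft-label mixing are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch: small-loss samples keep their hard (one-hot)
# labels, while the rest get the peer network's softened predictions.
import torch
import torch.nn.functional as F

def soft_hard_targets(logits_self, logits_peer, y_noisy, keep_ratio=0.7):
    """Return per-sample training targets mixing hard and soft labels."""
    losses = F.cross_entropy(logits_self, y_noisy, reduction="none")
    k = max(1, int(keep_ratio * len(y_noisy)))
    clean = torch.zeros_like(y_noisy, dtype=torch.bool)
    clean[torch.topk(-losses, k).indices] = True  # small-loss = likely clean

    num_classes = logits_self.size(1)
    hard = F.one_hot(y_noisy, num_classes).float()
    soft = logits_peer.softmax(dim=1).detach()    # peer's soft labels
    return torch.where(clean.unsqueeze(1), hard, soft)

# Training loss against the mixed targets:
# loss = -(targets * F.log_softmax(logits_self, 1)).sum(1).mean()
```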
arXiv Detail & Related papers (2024-10-29T09:23:09Z)
- How does self-supervised pretraining improve robustness against noisy labels across various medical image classification datasets? [9.371321044764624]
Noisy labels can significantly impact medical image classification, particularly in deep learning.
Self-supervised pretraining, which doesn't rely on labeled data, can enhance robustness against noisy labels.
Our results show that, among the five datasets studied, DermNet is the most challenging yet exhibits greater robustness against noisy labels.
arXiv Detail & Related papers (2024-01-15T22:29:23Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ a self-ensemble model with a noisy label filter to efficiently separate clean samples from noisy ones.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
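One common way to realize such a self-ensemble filter, sketched below under stated assumptions, is to keep an EMA "teacher" copy of the network and fit a two-component Gaussian mixture to its per-sample losses, treating the low-loss mode as clean. This is a DivideMix-style recipe used only to illustrate the idea; the paper's actual filter may differ.

```python
# Hedged sketch of a self-ensemble (EMA teacher) noisy-label filter.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Self-ensemble: teacher weights are an EMA of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1 - momentum)

@torch.no_grad()
def split_clean_noisy(teacher, x, y_noisy, clean_threshold=0.5):
    """Fit a 2-component GMM on per-sample losses; low-loss mode = clean."""
    losses = F.cross_entropy(teacher(x), y_noisy, reduction="none")
    arr = losses.cpu().numpy().reshape(-1, 1)
    gmm = GaussianMixture(n_components=2).fit(arr)
    clean_comp = int(np.argmin(gmm.means_.ravel()))  # low-loss component
    p_clean = gmm.predict_proba(arr)[:, clean_comp]
    return torch.from_numpy(p_clean > clean_threshold)
```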
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
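A minimal sketch of confidence-weighted label refurbishment in this spirit: mix the given (possibly noisy) one-hot label with the model's pseudo-label, weighted by the prediction confidence. The weighting rule here is an assumption for illustration, not necessarily Robust LR's exact formula.

```python
# Sketch: refurbish a batch of targets by blending the annotation with
# the model's pseudo-label, weighted by prediction confidence.
import torch
import torch.nn.functional as F

@torch.no_grad()
def refurbish_labels(logits, y_noisy, num_classes):
    probs = logits.softmax(dim=1)
    conf = probs.max(dim=1).values          # confidence in the pseudo-label
    one_hot = F.one_hot(y_noisy, num_classes).float()
    # High-confidence predictions pull the target toward the pseudo-label;
    # low-confidence ones keep the original annotation.
    return conf.unsqueeze(1) * probs + (1 - conf).unsqueeze(1) * one_hot
```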
arXiv Detail & Related papers (2021-12-06T12:10:17Z)
- Noisy Label Learning for Large-scale Medical Image Classification [37.79118840129632]
We adapt a state-of-the-art noisy-label multi-class training approach to learn a multi-label classifier for the dataset Chest X-ray14.
We show that the majority of label noise on Chest X-ray14 is present in the class 'No Finding', which is intuitively correct because this is the most likely class to contain one or more of the 14 diseases due to labelling mistakes.
arXiv Detail & Related papers (2021-03-06T07:42:36Z)
- Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation [72.0276067144762]
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty estimation-based framework to handle these two types of label noise in the medical image classification task.
arXiv Detail & Related papers (2021-02-28T14:56:45Z)
- A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z)
- Exploiting Context for Robustness to Label Noise in Active Learning [47.341705184013804]
We address two problems: how a system can identify which of the queried labels are wrong, and how a multi-class active learning system can be adapted to minimize the negative impact of label noise.
We construct a graphical representation of the unlabeled data to encode these relationships and obtain new beliefs on the graph when noisy labels are available.
This is demonstrated in three different applications: scene classification, activity classification, and document classification.
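As a toy stand-in for that graphical representation, the sketch below propagates label beliefs over a kNN similarity graph built on the unlabeled pool, so that a point's neighbors can outvote a suspicious queried label. This is plain label propagation, shown only to illustrate the use of context; the paper's graphical model is more elaborate.

```python
# Toy sketch: revise label beliefs by propagation over a kNN graph.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def propagate_beliefs(features, init_beliefs, n_neighbors=10,
                      alpha=0.8, n_iters=20):
    """init_beliefs: (n, c) rows summing to 1; queried points get
    (possibly noisy) one-hot rows, the rest a uniform prior."""
    W = kneighbors_graph(features, n_neighbors, mode="connectivity")
    W = 0.5 * (W + W.T)                                # symmetrize
    d_inv = 1.0 / np.asarray(W.sum(axis=1)).ravel()    # degree normalizer
    beliefs = init_beliefs.copy()
    for _ in range(n_iters):
        neighbor_msg = d_inv[:, None] * (W @ beliefs)  # average of neighbors
        beliefs = alpha * neighbor_msg + (1 - alpha) * init_beliefs
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```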
arXiv Detail & Related papers (2020-10-18T18:59:44Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels [98.13491369929798]
We propose a framework called Class2Simi, which transforms data points with noisy class labels to data pairs with noisy similarity labels.
Class2Simi is computationally efficient: the transformation is performed on the fly within mini-batches, and it only changes the loss on top of the model's predictions to a pairwise form.
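The core transformation is easy to state in code. Below is a hedged sketch: pairwise similarity labels are derived on the fly from the (noisy) class labels of a mini-batch, and a pairwise loss is computed on top of the model's class predictions. Details such as the exact pairwise loss are simplified relative to the paper.

```python
# Sketch of the Class2Simi idea: noisy class labels -> pairwise labels.
import torch

def class_to_simi(y):
    """Pairwise labels: 1 if two samples share a class label, else 0.
    Noise is reduced because two wrong class labels can still yield a
    correct 'dissimilar' pair."""
    return (y.unsqueeze(0) == y.unsqueeze(1)).float()

def pairwise_similarity_loss(logits, y):
    probs = logits.softmax(dim=1)
    # Model's probability that each pair belongs to the same class
    # (includes the trivial diagonal self-pairs, fine for a sketch).
    p_same = (probs @ probs.T).clamp(1e-6, 1 - 1e-6)
    s = class_to_simi(y)
    return -(s * p_same.log() + (1 - s) * (1 - p_same).log()).mean()
```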
arXiv Detail & Related papers (2020-06-14T07:55:32Z)