Deep Learning from Small Amount of Medical Data with Noisy Labels: A
Meta-Learning Approach
- URL: http://arxiv.org/abs/2010.06939v2
- Date: Mon, 15 Feb 2021 08:42:57 GMT
- Title: Deep Learning from Small Amount of Medical Data with Noisy Labels: A
Meta-Learning Approach
- Authors: Görkem Algan, Ilkay Ulusoy, Şaban Gönül, Banu Turgut, Berker Bakbak
- Abstract summary: Computer vision systems require correctly labeled large datasets in order to be trained properly.
Medical imaging datasets are commonly tiny, which makes every sample especially important for learning.
A label-noise-robust learning algorithm that makes use of the meta-learning paradigm is proposed in this article.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision systems recently made a big leap thanks to deep neural
networks. However, these systems require correctly labeled large datasets in
order to be trained properly, which is very difficult to obtain for medical
applications. Two main reasons for label noise in medical applications are the
high complexity of the data and conflicting opinions of experts. Moreover,
medical imaging datasets are commonly tiny, which makes every sample
especially important for learning. As a result, if not handled properly, label noise
significantly degrades the performance. Therefore, a label-noise-robust
learning algorithm that makes use of the meta-learning paradigm is proposed in
this article. The proposed solution is tested on a retinopathy of prematurity
(ROP) dataset with a very high label noise rate of 68%. Results show that the
proposed algorithm significantly improves the classification algorithm's
performance in the presence of noisy labels.
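The abstract does not spell out the algorithm, so the paper's exact method is not reproduced here. As a generic illustration of the meta-learning paradigm for noisy labels, the sketch below reweights noisy training examples by how well each example's gradient aligns with the gradient computed on a small trusted set, in the spirit of learning-to-reweight approaches. The logistic-regression setting and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_reweight_step(w, X_noisy, y_noisy, X_clean, y_clean, lr=0.1):
    """One meta-learning reweighting step for logistic regression.

    Each noisy example is weighted by the alignment of its gradient with
    the gradient on a small trusted (meta) set; misaligned examples
    (likely mislabeled) receive zero weight. Illustrative sketch only.
    """
    # Per-example gradients on the noisy batch: g_i = (p_i - y_i) * x_i
    p_noisy = sigmoid(X_noisy @ w)
    per_ex_grads = (p_noisy - y_noisy)[:, None] * X_noisy  # shape (n, d)

    # Average gradient on the clean meta-set
    p_clean = sigmoid(X_clean @ w)
    meta_grad = ((p_clean - y_clean)[:, None] * X_clean).mean(axis=0)

    # Weight each noisy example by gradient alignment, clipped at zero
    align = per_ex_grads @ meta_grad
    eps = np.maximum(0.0, align)
    if eps.sum() > 0:
        weights = eps / eps.sum()
    else:
        weights = np.full(len(eps), 1.0 / len(eps))  # fall back to uniform

    # Weighted gradient step on the noisy batch
    grad = (weights[:, None] * per_ex_grads).sum(axis=0)
    return w - lr * grad, weights
```

In this sketch, examples whose labels were flipped tend to produce gradients opposing the trusted-set gradient and are therefore down-weighted to zero, which is the intuition behind gradient-alignment-based sample reweighting.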
Related papers
- Noisy Label Processing for Classification: A Survey [2.8821062918162146]
In the long, tedious process of data annotation, annotators are prone to make mistakes, resulting in incorrect labels of images.
It is crucial to combat noisy labels for computer vision tasks, especially for classification tasks.
We propose an algorithm to generate a synthetic label noise pattern guided by real-world data.
arXiv Detail & Related papers (2024-04-05T15:11:09Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across a broad range of noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Inconsistency Ranking-based Noisy Label Detection for High-quality Data [11.844624139434867]
This paper proposes an automatic noisy label detection (NLD) technique with inconsistency ranking for high-quality data.
We investigate both inter-class and intra-class inconsistency ranking and compare several metric learning loss functions under different noise settings.
Experimental results confirm that the proposed solution improves both the efficiency and the effectiveness of cleaning large-scale speaker recognition datasets.
arXiv Detail & Related papers (2022-12-01T03:09:33Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Co-Correcting: Noise-tolerant Medical Image Classification via Mutual Label Correction [5.994566233473544]
This paper proposes a noise-tolerant medical image classification framework named Co-Correcting.
It significantly improves classification accuracy and obtains more accurate labels through dual-network mutual learning, label probability estimation, and curriculum label correcting.
Experiments show that Co-Correcting achieves the best accuracy and generalization under different noise ratios in various tasks.
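Co-Correcting's dual-network mutual learning is not specified in detail above; a common building block for such dual-network schemes is co-teaching-style small-loss selection, where each network picks the cleanest-looking (lowest-loss) samples to train its peer. The sketch below illustrates only that selection step, under the assumption that small-loss samples are more likely correctly labeled; it is not the exact Co-Correcting mechanism.

```python
import numpy as np

def small_loss_exchange(losses_a, losses_b, keep_ratio):
    """Co-teaching-style sample exchange between two networks.

    Network A selects its small-loss (clean-looking) samples to train
    network B, and vice versa, so each network filters label noise for
    its peer. Simplified illustration of dual-network mutual learning.
    """
    k = max(1, int(keep_ratio * len(losses_a)))
    idx_for_b = np.argsort(losses_a)[:k]  # A's cleanest samples train B
    idx_for_a = np.argsort(losses_b)[:k]  # B's cleanest samples train A
    return idx_for_a, idx_for_b
```

Because the two networks start from different initializations, they make different mistakes, and exchanging selections keeps one network's errors from reinforcing themselves, the usual motivation for using two networks instead of one.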
arXiv Detail & Related papers (2021-09-11T02:09:52Z)
- Noisy Label Learning for Large-scale Medical Image Classification [37.79118840129632]
We adapt a state-of-the-art noisy-label multi-class training approach to learn a multi-label classifier for the dataset Chest X-ray14.
We show that the majority of label noise on Chest X-ray14 is present in the class 'No Finding', which is intuitively correct because this is the most likely class to contain one or more of the 14 diseases due to labelling mistakes.
arXiv Detail & Related papers (2021-03-06T07:42:36Z)
- Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation [72.0276067144762]
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty-estimation-based framework to handle these two types of label noise in the medical image classification task.
arXiv Detail & Related papers (2021-02-28T14:56:45Z)
- Matching the Clinical Reality: Accurate OCT-Based Diagnosis From Few Labels [2.891413712995642]
Unlabeled data is often abundant in the clinic, making machine learning methods based on semi-supervised learning a good match for this setting.
Recently proposed MixMatch and FixMatch algorithms have demonstrated promising results in extracting useful representations.
We find that both algorithms outperform the transfer learning baseline on all fractions of labelled data.
arXiv Detail & Related papers (2020-10-23T11:47:28Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.