How does self-supervised pretraining improve robustness against noisy
labels across various medical image classification datasets?
- URL: http://arxiv.org/abs/2401.07990v1
- Date: Mon, 15 Jan 2024 22:29:23 GMT
- Title: How does self-supervised pretraining improve robustness against noisy
labels across various medical image classification datasets?
- Authors: Bidur Khanal, Binod Bhattarai, Bishesh Khanal, Cristian Linte
- Abstract summary: Noisy labels can significantly impact medical image classification, particularly in deep learning.
Self-supervised pretraining, which doesn't rely on labeled data, can enhance robustness against noisy labels.
Our results show that DermNet, among five datasets, is the most challenging but exhibits greater robustness against noisy labels.
- Score: 9.371321044764624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Noisy labels can significantly impact medical image classification,
particularly in deep learning, by corrupting learned features. Self-supervised
pretraining, which doesn't rely on labeled data, can enhance robustness against
noisy labels. However, this robustness varies based on factors like the number
of classes, dataset complexity, and training size. In medical images, subtle
inter-class differences and modality-specific characteristics add complexity.
Previous research hasn't comprehensively explored the interplay between
self-supervised learning and robustness against noisy labels in medical image
classification, considering all these factors. In this study, we address three
key questions: i) How does label noise impact various medical image
classification datasets? ii) Which types of medical image datasets are more
challenging to learn and more affected by label noise? iii) How do different
self-supervised pretraining methods enhance robustness across various medical
image datasets? Our results show that DermNet, among five datasets (Fetal
plane, DermNet, COVID-DU-Ex, MURA, NCT-CRC-HE-100K), is the most challenging
but exhibits greater robustness against noisy labels. Additionally, contrastive
learning stands out among the eight self-supervised methods as the most
effective approach to enhance robustness against noisy labels.
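
As a rough illustration of the experimental setup the abstract describes, the sketch below injects symmetric label noise into a labeled medical dataset and initializes a classifier from self-supervised (e.g., contrastive) pretrained weights before fine-tuning. The ResNet-18 backbone, checkpoint path, and noise model are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    """Flip a fraction `noise_rate` of labels uniformly to a different class.
    `labels` is a 1-D numpy array of integer class indices."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    n_noisy = int(noise_rate * len(labels))
    noisy_idx = rng.choice(len(labels), size=n_noisy, replace=False)
    for i in noisy_idx:
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels, noisy_idx

def build_classifier(num_classes, ssl_checkpoint=None):
    """ResNet-18 backbone; optionally initialize from self-supervised weights.
    Assumes the (hypothetical) checkpoint stores encoder weights directly."""
    model = models.resnet18(weights=None)
    if ssl_checkpoint is not None:
        state = torch.load(ssl_checkpoint, map_location="cpu")
        model.load_state_dict(state, strict=False)  # load encoder weights only
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh classification head
    return model

# Example: 40% symmetric noise on a 7-class dataset, fine-tuning from SSL weights.
# noisy_labels, _ = inject_symmetric_noise(clean_labels, num_classes=7, noise_rate=0.4)
# model = build_classifier(num_classes=7, ssl_checkpoint="simclr_encoder.pt")
```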
Related papers
- Self-Relaxed Joint Training: Sample Selection for Severity Estimation with Ordinal Noisy Labels [5.892066196730197]
We propose a new framework for training with "ordinal" noisy labels.
Our framework uses two techniques: clean sample selection and dual-network architecture.
By appropriately using the soft and hard labels in the two techniques, we achieve more accurate sample selection and robust network training.
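
The clean-sample-selection step in dual-network frameworks is often implemented as a "small-loss" filter, where each network keeps the samples its peer finds easiest. The sketch below shows only that generic heuristic under assumed inputs; the paper's self-relaxed variant with soft and hard labels differs in detail.

```python
import torch
import torch.nn.functional as F

def small_loss_selection(logits_a, logits_b, labels, keep_ratio):
    """Dual-network small-loss filtering: each network selects the lowest-loss
    (presumed clean) samples, and that subset is used to train its peer."""
    loss_a = F.cross_entropy(logits_a, labels, reduction="none")
    loss_b = F.cross_entropy(logits_b, labels, reduction="none")
    k = max(1, int(keep_ratio * labels.size(0)))
    keep_for_b = torch.topk(-loss_a, k).indices  # clean set chosen by net A, trains net B
    keep_for_a = torch.topk(-loss_b, k).indices  # clean set chosen by net B, trains net A
    return keep_for_a, keep_for_b
```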
arXiv Detail & Related papers (2024-10-29T09:23:09Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- Investigating the Robustness of Vision Transformers against Label Noise in Medical Image Classification [8.578500152567164]
Label noise in medical image classification datasets hampers the training of supervised deep learning methods.
We show that pretraining is crucial for ensuring ViT's improved robustness against label noise in supervised training.
arXiv Detail & Related papers (2024-02-26T16:53:23Z)
- Improving Medical Image Classification in Noisy Labels Using Only Self-supervised Pretraining [9.01547574908261]
Noisy labels hurt deep learning-based supervised image classification performance as the models may overfit the noise and learn corrupted feature extractors.
In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels.
Our results show that models with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
arXiv Detail & Related papers (2023-08-08T19:45:06Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Learning to Aggregate and Refine Noisy Labels for Visual Sentiment Analysis [69.48582264712854]
We propose a robust learning method to perform robust visual sentiment analysis.
Our method relies on an external memory to aggregate and filter noisy labels during training.
We establish a benchmark for visual sentiment analysis with label noise using publicly available datasets.
arXiv Detail & Related papers (2021-09-15T18:18:28Z)
- Co-Correcting: Noise-tolerant Medical Image Classification via Mutual Label Correction [5.994566233473544]
This paper proposes a noise-tolerant medical image classification framework named Co-Correcting.
It significantly improves classification accuracy and obtains more accurate labels through dual-network mutual learning, label probability estimation, and curriculum label correction.
Experiments show that Co-Correcting achieves the best accuracy and generalization under different noise ratios in various tasks.
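
A minimal sketch of the label-correction idea: blend the given, possibly noisy, one-hot label with the averaged predictions of two peer networks to obtain corrected soft targets. The `trust` weight is a hypothetical parameter; Co-Correcting's curriculum scheduling and full mutual-learning loop are not reproduced here.

```python
import torch
import torch.nn.functional as F

def correct_labels(given_labels, probs_a, probs_b, num_classes, trust=0.5):
    """Return soft targets that mix the (possibly noisy) one-hot labels with the
    label probability estimate from two peer networks' averaged predictions."""
    one_hot = F.one_hot(given_labels, num_classes).float()
    peer_estimate = 0.5 * (probs_a + probs_b)        # averaged peer predictions
    return trust * one_hot + (1.0 - trust) * peer_estimate
```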
arXiv Detail & Related papers (2021-09-11T02:09:52Z)
- Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation [72.0276067144762]
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty estimation-based framework to handle these two types of label noise in medical image classification.
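
As one hedged illustration of uncertainty-based handling of noisy labels, the snippet below down-weights samples whose predictions have high entropy. This is a generic proxy for the idea, not the paper's dual-uncertainty estimation.

```python
import torch
import torch.nn.functional as F

def entropy_weighted_loss(logits, labels):
    """Cross-entropy where each sample is down-weighted by its normalized
    predictive entropy, so highly uncertain (likely noisy) samples count less."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))
    weights = 1.0 - entropy / max_entropy            # in [0, 1]
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights.detach() * per_sample).mean()
```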
arXiv Detail & Related papers (2021-02-28T14:56:45Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to support the cross-attention process and to overcome both class imbalance and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)