Investigating the Robustness of Vision Transformers against Label Noise
in Medical Image Classification
- URL: http://arxiv.org/abs/2402.16734v1
- Date: Mon, 26 Feb 2024 16:53:23 GMT
- Authors: Bidur Khanal, Prashant Shrestha, Sanskar Amgain, Bishesh Khanal, Binod
Bhattarai, Cristian A. Linte
- Abstract summary: Label noise in medical image classification datasets hampers the training of supervised deep learning methods.
We show that pretraining is crucial for ensuring ViT's improved robustness against label noise in supervised training.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Label noise in medical image classification datasets significantly hampers
the training of supervised deep learning methods, undermining their
generalizability. The test performance of a model tends to decrease as the
label noise rate increases. Over recent years, several methods have been
proposed to mitigate the impact of label noise in medical image classification
and enhance the robustness of the model. Predominantly, these works have
employed CNN-based architectures as the backbone of their classifiers for
feature extraction. However, in recent years, Vision Transformer (ViT)-based
backbones have replaced CNNs, demonstrating improved performance and a greater
ability to learn more generalizable features, especially when the dataset is
large. Nevertheless, no prior work has rigorously investigated how
transformer-based backbones handle the impact of label noise in medical image
classification. In this paper, we investigate the architectural robustness of
ViT against label noise and compare it to that of CNNs. We use two medical
image classification datasets -- COVID-DU-Ex and NCT-CRC-HE-100K -- both
corrupted by injecting label noise at various rates. Additionally, we show that
pretraining is crucial for ensuring ViT's improved robustness against label
noise in supervised training.
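The label-corruption protocol described in the abstract (injecting label noise at various rates) can be sketched as symmetric noise injection. This is a minimal plain-Python illustration under that assumption, not the authors' released code; `inject_label_noise` is a hypothetical helper name.

```python
import random

def inject_label_noise(labels, noise_rate, num_classes, seed=0):
    # Symmetric label noise: flip a fixed fraction of labels to a
    # different class chosen uniformly at random (hypothetical helper,
    # not the authors' implementation).
    rng = random.Random(seed)
    noisy = list(labels)
    n_flip = round(noise_rate * len(noisy))
    for i in rng.sample(range(len(noisy)), n_flip):
        # Offset in 1..num_classes-1 guarantees the label actually changes.
        noisy[i] = (noisy[i] + rng.randrange(1, num_classes)) % num_classes
    return noisy

clean = [0] * 1000  # toy single-class ground truth
noisy = inject_label_noise(clean, noise_rate=0.3, num_classes=4)
flipped = sum(a != b for a, b in zip(clean, noisy)) / len(clean)
print(flipped)  # exactly 0.3, since every flip changes the label
```

Because the flip count is deterministic, the observed noise rate matches the requested rate exactly, which makes sweeps over "various rates" reproducible.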
Related papers
- GCI-ViTAL: Gradual Confidence Improvement with Vision Transformers for Active Learning on Label Noise [1.603727941931813]
This study focuses on image classification tasks, comparing AL methods on CIFAR10, CIFAR100, Food101, and the Chest X-ray datasets.
We propose a novel deep active learning algorithm, GCI-ViTAL, designed to be robust to label noise.
arXiv Detail & Related papers (2024-11-08T19:59:40Z)
- Contrastive-Based Deep Embeddings for Label Noise-Resilient Histopathology Image Classification [0.0]
Noisy labels represent a critical challenge in histopathology image classification.
Deep neural networks can easily overfit label noise, leading to severe degradations in model performance.
We exhibit the label noise resilience property of embeddings extracted from foundation models trained in a self-supervised contrastive manner.
arXiv Detail & Related papers (2024-04-11T09:47:52Z)
- Improving Medical Image Classification in Noisy Labels Using Only Self-supervised Pretraining [9.01547574908261]
Noisy labels hurt deep learning-based supervised image classification performance as the models may overfit the noise and learn corrupted feature extractors.
In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels.
Our results show that models with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
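Contrastive self-supervised pretraining of the kind referenced above typically optimizes an InfoNCE-style objective. The following is a generic plain-Python sketch of that loss under the usual cosine-similarity formulation; it is not the specific pretext tasks or code from the paper.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Generic InfoNCE-style contrastive loss over plain-Python vectors:
    # pull the positive pair together, push negatives apart.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm
    logits = [cos(anchor, positive) / temperature]
    logits += [cos(anchor, n) / temperature for n in negatives]
    # Numerically stable log-sum-exp for the softmax denominator.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

# Loss is small when the positive aligns with the anchor and the
# negatives do not, and large in the reverse case.
loss = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
```

The intuition for noise robustness is that this objective never touches class labels, so corrupted labels cannot distort the pretrained features.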
arXiv Detail & Related papers (2023-08-08T19:45:06Z)
- Label-noise-tolerant medical image classification via self-attention and self-supervised learning [5.6827706625306345]
We propose a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification.
Specifically, we incorporate contrastive learning and intra-group attention mixup strategies into the vanilla supervised learning.
Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise.
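The mixup strategy referenced above builds on vanilla mixup, which trains on convex combinations of input pairs and their labels. A minimal sketch of the standard formulation follows; the paper's intra-group attention mixup is a variant that is not reproduced here.

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=random):
    # Vanilla mixup (not the paper's intra-group attention variant):
    # draw lambda from Beta(alpha, alpha) and form convex combinations
    # of both the inputs and the (one-hot) labels.
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

x, y, lam = mixup_pair([1.0, 0.0], [1.0, 0.0],
                       [0.0, 1.0], [0.0, 1.0],
                       rng=random.Random(0))
```

Soft targets produced this way dilute the influence of any single (possibly mislabeled) example, which is one reason mixup variants help under label noise.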
arXiv Detail & Related papers (2023-06-16T09:37:16Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classes, using the largest and richest dataset assembled for this task.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation [72.0276067144762]
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty estimation-based framework to handle these two types of label noise in the medical image classification task.
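A common building block for uncertainty estimation is the predictive entropy of the model's class distribution. The sketch below shows that generic score only; it is not the paper's dual-uncertainty framework.

```python
import math

def predictive_entropy(probs):
    # Entropy of a predicted class distribution: maximal for a uniform
    # prediction, zero for a fully confident one. A generic uncertainty
    # score, not the paper's dual-uncertainty method.
    return -sum(p * math.log(p) for p in probs if p > 0)

print(predictive_entropy([0.25, 0.25, 0.25, 0.25]))  # ln(4) ~ 1.386
```

Samples with high predictive entropy are candidates for being noisily labeled, which is how such scores are typically used to down-weight or relabel suspect examples.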
arXiv Detail & Related papers (2021-02-28T14:56:45Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant Disease Diagnosis [64.82680813427054]
Plant diseases are one of the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates rectified meta-learning module into common CNN paradigm to train a noise-robust deep network without using extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.