Label-noise-tolerant medical image classification via self-attention and
self-supervised learning
- URL: http://arxiv.org/abs/2306.09718v1
- Date: Fri, 16 Jun 2023 09:37:16 GMT
- Authors: Hongyang Jiang, Mengdi Gao, Yan Hu, Qiushi Ren, Zhaoheng Xie, Jiang
Liu
- Abstract summary: We propose a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification.
Specifically, we incorporate contrastive learning and intra-group attention mixup strategies into vanilla supervised learning.
Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise.
- Score: 5.6827706625306345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have been widely applied in medical image
classification and achieve remarkable classification performance. These
achievements heavily depend on large-scale accurately annotated training data.
However, label noise is inevitably introduced during medical image annotation,
as the labeling process heavily relies on the expertise and experience of
annotators. Meanwhile, DNNs suffer from overfitting noisy labels, degrading the
performance of models. Therefore, in this work, we devise a novel
noise-robust training approach to mitigate the adverse effects of noisy labels
in medical image classification. Specifically, we incorporate contrastive
learning and intra-group attention mixup strategies into vanilla supervised
learning. Contrastive learning for the feature extractor helps enhance the
visual representations of DNNs. The intra-group attention mixup module
constructs groups, assigns self-attention weights to group-wise samples,
and then interpolates a large number of noise-suppressed samples through weighted
mixup operation. We conduct comparative experiments on both synthetic and
real-world noisy medical datasets under various noise levels. Rigorous
experiments validate that our noise-robust method with contrastive learning and
attention mixup can effectively handle label noise, and is superior to
state-of-the-art methods. An ablation study also shows that both components
contribute to boosting model performance. The proposed method demonstrates its
capability to curb label noise and has certain potential for real-world
clinical applications.
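The intra-group attention mixup described above can be sketched in a few lines: samples in a group are scored by self-attention over their features, the scores are normalized with a softmax, and the resulting weights drive a weighted mixup of images and labels so that outlier (potentially mislabeled) samples are down-weighted. This is a minimal NumPy sketch under assumed shapes and a simplified dot-product attention; the paper's exact grouping scheme and attention formulation may differ.

```python
import numpy as np

def intra_group_attention_mixup(features, images, labels):
    """Hypothetical sketch of attention-weighted mixup within one group.

    features: (G, D) feature vectors for the G samples in a group
    images:   (G, H, W) images
    labels:   (G, C) one-hot labels
    Returns one mixed image/label pair whose interpolation weights
    down-weight samples that are dissimilar to the rest of the group.
    """
    # Dot-product self-attention: how similar each sample is to the group.
    sim = features @ features.T                       # (G, G)
    scores = sim.sum(axis=1) / np.sqrt(features.shape[1])
    # Softmax over the group gives the mixup weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted mixup: interpolate images and labels with those weights.
    mixed_image = np.tensordot(weights, images, axes=1)  # (H, W)
    mixed_label = weights @ labels                       # (C,)
    return mixed_image, mixed_label
```

If all features in a group are identical, the weights reduce to a uniform average; as one sample drifts away from the group in feature space, its contribution to the mixed sample shrinks, which is the noise-suppression intuition.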
Related papers
- Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise [10.232537737211098]
We propose a two-phase approach that combines Learning with Noisy Labels (LNL) and active learning.
We demonstrate that our proposed technique is superior to its predecessors at handling class imbalance by not misidentifying clean samples from minority classes as mostly noisy samples.
arXiv Detail & Related papers (2024-07-08T14:16:05Z) - Contrastive-Based Deep Embeddings for Label Noise-Resilient Histopathology Image Classification [0.0]
Noisy labels represent a critical challenge in histopathology image classification.
Deep neural networks can easily overfit label noise, leading to severe degradations in model performance.
We exhibit the label noise resilience property of embeddings extracted from foundation models trained in a self-supervised contrastive manner.
arXiv Detail & Related papers (2024-04-11T09:47:52Z) - Investigating the Robustness of Vision Transformers against Label Noise
in Medical Image Classification [8.578500152567164]
Label noise in medical image classification datasets hampers the training of supervised deep learning methods.
We show that pretraining is crucial for ensuring ViT's improved robustness against label noise in supervised training.
arXiv Detail & Related papers (2024-02-26T16:53:23Z) - Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition [70.00984078351927]
This paper focuses on reducing noise based on some inherent properties of multi-label classification and long-tailed learning under noisy cases.
We propose a Stitch-Up augmentation to synthesize a cleaner sample, which directly reduces multi-label noise.
A Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions.
arXiv Detail & Related papers (2023-07-03T09:20:28Z) - Robust Medical Image Classification from Noisy Labeled Data with Global
and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z) - Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z) - Improving Medical Image Classification with Label Noise Using
Dual-uncertainty Estimation [72.0276067144762]
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty estimation-based framework to handle these two types of label noise in the medical image classification task.
arXiv Detail & Related papers (2021-02-28T14:56:45Z) - Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z) - Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant
Disease Diagnosis [64.82680813427054]
Plant diseases serve as one of the main threats to food security and crop production.
One popular approach is to frame this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates rectified meta-learning module into common CNN paradigm to train a noise-robust deep network without using extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.