Self-supervision for medical image classification: state-of-the-art
performance with ~100 labeled training samples per class
- URL: http://arxiv.org/abs/2304.05163v2
- Date: Wed, 6 Sep 2023 13:57:03 GMT
- Authors: Maximilian Nielsen, Laura Wenderoth, Thilo Sentker, René Werner
- Abstract summary: We analyze the performance of self-supervised DL within the self-distillation with no labels (DINO) framework.
We achieve state-of-the-art classification performance for all three imaging modalities and data sets.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Is self-supervised deep learning (DL) for medical image analysis already a
serious alternative to the de facto standard of end-to-end trained supervised
DL? We tackle this question for medical image classification, with a particular
focus on one of the currently most limiting factors of the field: the
(non-)availability of labeled data. Based on three common medical imaging
modalities (bone marrow microscopy, gastrointestinal endoscopy, dermoscopy) and
publicly available data sets, we analyze the performance of self-supervised DL
within the self-distillation with no labels (DINO) framework. After learning an
image representation without use of image labels, conventional machine learning
classifiers are applied. The classifiers are fit using a systematically varied
number of labeled data (1-1000 samples per class). Exploiting the learned image
representation, we achieve state-of-the-art classification performance for all
three imaging modalities and data sets with only a fraction of between 1% and
10% of the available labeled data and about 100 labeled samples per class.
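The pipeline described in the abstract (learn a label-free image representation, then fit a conventional classifier on a small, systematically varied number of labeled samples per class) can be sketched as follows. This is a minimal illustration, not the authors' code: the random vectors stand in for precomputed DINO embeddings (384-dimensional, the output size of a ViT-S backbone), and the nearest-centroid classifier is one simple example of the "conventional machine learning classifiers" the paper refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for DINO embeddings: 3 classes, 384-dim features, with
# class-dependent means so the classes are separable in feature space.
n_classes, dim, n_per_class = 3, 384, 500
means = rng.normal(0, 1, (n_classes, dim))
X = np.concatenate([means[c] + 0.5 * rng.normal(0, 1, (n_per_class, dim))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def fit_nearest_centroid(X, y, k_per_class, rng):
    """Fit a nearest-centroid classifier using k labeled samples per class."""
    centroids = []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], size=k_per_class, replace=False)
        centroids.append(X[idx].mean(axis=0))
    return np.stack(centroids)

def predict(centroids, X):
    # Assign each sample to the class with the nearest centroid.
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Vary the labeled budget, mirroring the paper's 1-1000 samples/class sweep.
for k in (1, 10, 100):
    centroids = fit_nearest_centroid(X, y, k, rng)
    acc = (predict(centroids, X) == y).mean()
    print(f"k={k:4d} labeled samples/class: accuracy {acc:.3f}")
```

In the actual study, the embeddings would come from a DINO-pretrained encoder applied to the medical images, and the classifier's labeled training set would be subsampled from the real annotations; the point of the sketch is only the two-stage structure, with all supervision confined to the small, cheap second stage.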
Related papers
- FBA-Net: Foreground and Background Aware Contrastive Learning for
Semi-Supervised Atrium Segmentation [10.11072886547561]
We propose a contrastive learning strategy of foreground and background representations for semi-supervised 3D medical image segmentation.
Our framework has the potential to advance the field of semi-supervised 3D medical image segmentation.
arXiv Detail & Related papers (2023-06-27T04:14:50Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Classification of Luminal Subtypes in Full Mammogram Images Using
Transfer Learning [8.961271420114794]
Transfer learning is applied from a breast abnormality classification task to finetune a ResNet-18-based luminal versus non-luminal subtype classification task.
We show that our approach significantly outperforms the baseline classifier by achieving a mean AUC score of 0.6688 and a mean F1 score of 0.6693 on the test dataset.
arXiv Detail & Related papers (2023-01-23T05:58:26Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely
Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel matters equally to model training; we observe empirically that this assumption alone is unlikely to yield meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Weakly-supervised Generative Adversarial Networks for medical image
classification [1.479639149658596]
We propose a novel medical image classification algorithm called Weakly-Supervised Generative Adversarial Networks (WSGAN)
WSGAN only uses a small number of real images without labels to generate fake images or mask images to enlarge the sample size of the training set.
We show that WSGAN can obtain relatively high learning performance by using few labeled and unlabeled data.
arXiv Detail & Related papers (2021-11-29T15:38:48Z)
- Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach highly boosts the performance of a model trained on a few scans.
arXiv Detail & Related papers (2021-07-29T04:30:46Z)
- Malignancy Prediction and Lesion Identification from Clinical
Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
Our approach first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to support the cross-attention process and to overcome both the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.