A Survey of the Impact of Self-Supervised Pretraining for Diagnostic
Tasks with Radiological Images
- URL: http://arxiv.org/abs/2309.02555v1
- Date: Tue, 5 Sep 2023 19:45:09 GMT
- Title: A Survey of the Impact of Self-Supervised Pretraining for Diagnostic
Tasks with Radiological Images
- Authors: Blake VanBerlo, Jesse Hoey, Alexander Wong
- Abstract summary: Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised pretraining has been observed to be effective at improving
feature representations for transfer learning, leveraging large amounts of
unlabelled data. This review summarizes recent research into its usage in
X-ray, computed tomography, magnetic resonance, and ultrasound imaging,
concentrating on studies that compare self-supervised pretraining to fully
supervised learning for diagnostic tasks such as classification and
segmentation. The most pertinent finding is that self-supervised pretraining
generally improves downstream task performance compared to full supervision,
most prominently when unlabelled examples greatly outnumber labelled examples.
Based on the aggregate evidence, recommendations are provided for practitioners
considering using self-supervised learning. Motivated by limitations identified
in current research, directions and practices for future study are suggested,
such as integrating clinical knowledge with theoretically justified
self-supervised learning methods, evaluating on public datasets, growing the
modest body of evidence for ultrasound, and characterizing the impact of
self-supervised pretraining on generalization.
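As a concrete point of reference for the two-stage workflow discussed throughout the review, here is a minimal sketch in PyTorch of self-supervised pretraining followed by supervised fine-tuning. The toy tensors stand in for real radiological data, and the MSE-between-views pretext objective is a deliberately crude placeholder for a real self-supervised loss; none of this reproduces any specific study's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Stage 1: self-supervised pretraining on unlabelled images.
encoder = models.resnet18(weights=None)   # no supervised ImageNet weights
encoder.fc = nn.Identity()                # keep only the feature extractor
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Two "augmented views" of an unlabelled batch (random toy tensors here).
view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
# Placeholder pretext objective: pull the two views' features together.
loss = F.mse_loss(encoder(view1), encoder(view2))
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on a small labelled set.
num_classes = 2
model = nn.Sequential(encoder, nn.Linear(512, num_classes))  # 512-d features
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, num_classes, (8,))
loss = F.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

The survey's central comparison is between this two-stage recipe and training the same architecture on the labelled set alone.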
Related papers
- Multi-organ Self-supervised Contrastive Learning for Breast Lesion Segmentation
This paper employs multi-organ datasets to pre-train models tailored to specific organ-related target tasks.
Our target task is breast tumour segmentation in ultrasound images.
Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
arXiv Detail & Related papers (2024-02-21T20:29:21Z)
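The abstract does not say which contrastive objective the authors used; for orientation, below is a minimal sketch of the InfoNCE-style loss that "conventional contrastive learning" usually refers to, with illustrative names and sizes.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss between two batches of embeddings where z1[i] and
    z2[i] come from two views of the same image (positives) and every other
    pairing in the batch is treated as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # scaled cosine similarities
    targets = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random embeddings standing in for encoder outputs.
print(info_nce(torch.randn(16, 64), torch.randn(16, 64)).item())
```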
- Evaluating the Fairness of the MIMIC-IV Dataset and a Baseline Algorithm: Application to the ICU Length of Stay Prediction
This paper uses the MIMIC-IV dataset to examine the fairness and bias in an XGBoost binary classification model predicting the ICU length of stay.
The research reveals class imbalances in the dataset across demographic attributes and employs data preprocessing and feature extraction.
The paper concludes with recommendations for fairness-aware machine learning techniques for mitigating biases and the need for collaborative efforts among healthcare professionals and data scientists.
arXiv Detail & Related papers (2023-12-31T16:01:48Z)
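As a toy illustration of the kind of per-group imbalance check described above, the snippet below tabulates positive-class prevalence by a demographic attribute. The data frame is fabricated for the example and is not drawn from MIMIC-IV.

```python
import pandas as pd

# Invented toy cohort; columns and values are illustrative, not MIMIC-IV.
df = pd.DataFrame({
    "sex":       ["F", "F", "M", "M", "M", "F", "M", "F"],
    "long_stay": [1,   0,   0,   0,   1,   1,   0,   0],  # binary target
})

# Positive-class rate and count per group: large gaps between groups are the
# sort of imbalance a fairness audit would flag before model training.
print(df.groupby("sex")["long_stay"].agg(["mean", "count"]))
```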
- Exploring the Utility of Self-Supervised Pretraining Strategies for the Detection of Absent Lung Sliding in M-Mode Lung Ultrasound
Self-supervised pretraining has been observed to improve performance in supervised learning tasks in medical imaging.
This study investigates the utility of self-supervised pretraining prior to supervised fine-tuning for the downstream task of lung sliding classification in M-mode lung ultrasound images.
arXiv Detail & Related papers (2023-04-05T20:01:59Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Evaluating the Robustness of Self-Supervised Learning in Medical Imaging
Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
arXiv Detail & Related papers (2021-05-14T17:49:52Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
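The teacher-student consistency paradigm is only named in the abstract; the sketch below shows one common generic variant, an exponential-moving-average (EMA) teacher with a consistency loss on unlabeled inputs, offered as an illustration of the idea rather than the paper's actual method.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy student/teacher pair; architecture and sizes are illustrative.
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)             # teacher is never updated by gradients

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x_unlab = torch.randn(8, 3, 32, 32)     # unlabeled batch (random toy data)

# Consistency loss: the student's predictions should agree with the teacher's
# on the same (in practice, differently augmented) unlabeled inputs.
loss = F.mse_loss(student(x_unlab).softmax(-1),
                  teacher(x_unlab).softmax(-1))
opt.zero_grad(); loss.backward(); opt.step()

# The teacher then tracks an EMA of the student's weights.
with torch.no_grad():
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(0.99).add_(ps, alpha=0.01)
```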
- Improving colonoscopy lesion classification using semi-supervised deep learning
Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data.
We demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images.
arXiv Detail & Related papers (2020-09-07T15:25:35Z)
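The jigsaw task is only named in the abstract; a toy version of its usual formulation (shuffle image tiles and classify which permutation was applied) is sketched below, with invented sizes and a trivially small network.

```python
import itertools
import random
import torch
import torch.nn as nn

perms = list(itertools.permutations(range(4)))  # all orderings of a 2x2 grid

def make_puzzle(img):
    """Split a (3, 32, 32) image into four 16x16 tiles, shuffle them with a
    randomly chosen permutation, and return (tiles, permutation index)."""
    tiles = [img[:, r:r + 16, c:c + 16] for r in (0, 16) for c in (0, 16)]
    label = random.randrange(len(perms))
    return torch.stack([tiles[i] for i in perms[label]]), label

# The pretext task: predict which of the 24 permutations was applied.
net = nn.Sequential(nn.Flatten(), nn.Linear(4 * 3 * 16 * 16, len(perms)))
x, y = make_puzzle(torch.randn(3, 32, 32))
loss = nn.CrossEntropyLoss()(net(x.unsqueeze(0)), torch.tensor([y]))
loss.backward()
```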
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
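The framework itself is not described in this summary; the snippet below sketches confidence-thresholded pseudo-labeling, one common form of self-training, on toy data. The 0.9 threshold is an arbitrary illustrative choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy classifier
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One self-training step: predict on unlabeled data, keep only confident
# predictions as pseudo-labels, then train on them as if they were labels.
x_unlab = torch.randn(32, 3, 32, 32)        # toy unlabeled batch
with torch.no_grad():
    conf, pseudo = model(x_unlab).softmax(-1).max(-1)
    keep = conf > 0.9                       # illustrative confidence threshold

if keep.any():
    loss = F.cross_entropy(model(x_unlab[keep]), pseudo[keep])
    opt.zero_grad(); loss.backward(); opt.step()
```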