Exploring the Utility of Self-Supervised Pretraining Strategies for the
Detection of Absent Lung Sliding in M-Mode Lung Ultrasound
- URL: http://arxiv.org/abs/2304.02724v1
- Date: Wed, 5 Apr 2023 20:01:59 GMT
- Title: Exploring the Utility of Self-Supervised Pretraining Strategies for the
Detection of Absent Lung Sliding in M-Mode Lung Ultrasound
- Authors: Blake VanBerlo, Brian Li, Alexander Wong, Jesse Hoey, Robert Arntfield
- Abstract summary: Self-supervised pretraining has been observed to improve performance in supervised learning tasks in medical imaging.
This study investigates the utility of self-supervised pretraining prior to conducting supervised fine-tuning for the downstream task of lung sliding classification in M-mode lung ultrasound images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised pretraining has been observed to improve performance in
supervised learning tasks in medical imaging. This study investigates the
utility of self-supervised pretraining prior to conducting supervised
fine-tuning for the downstream task of lung sliding classification in M-mode
lung ultrasound images. We propose a novel pairwise relationship that couples
M-mode images constructed from the same B-mode image and investigate the
utility of a data augmentation procedure specific to M-mode lung ultrasound. The
results indicate that self-supervised pretraining yields better performance
than full supervision, most notably for feature extractors not initialized with
ImageNet-pretrained weights. Moreover, we observe that including a vast volume
of unlabelled data results in improved performance on external validation
datasets, underscoring the value of self-supervision for improving
generalizability in automatic ultrasound interpretation. To the authors' best
knowledge, this study is the first to characterize the influence of
self-supervised pretraining for M-mode ultrasound.
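
The proposed pairwise relationship couples M-mode images constructed from the same B-mode clip. The abstract does not detail the construction, so the sketch below assumes the standard convention that an M-mode image is a single pixel column of a B-mode cine loop traced over time; the function names (`mmode_from_bmode`, `positive_pair`) and the column-sampling scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mmode_from_bmode(clip: np.ndarray, column: int) -> np.ndarray:
    """Build an M-mode image by stacking one pixel column across time.

    clip: B-mode cine loop with shape (frames, height, width).
    Returns a (height, frames) M-mode image.
    """
    return clip[:, :, column].T

def positive_pair(clip: np.ndarray, rng: np.random.Generator):
    """Sample two M-mode images from distinct columns of the same clip.

    Under a pairwise relationship like the one described in the paper,
    such images could be treated as a positive pair for contrastive
    self-supervised pretraining.
    """
    c1, c2 = rng.choice(clip.shape[2], size=2, replace=False)
    return mmode_from_bmode(clip, c1), mmode_from_bmode(clip, c2)

# Example: a synthetic 30-frame clip of 128x128 B-mode frames.
rng = np.random.default_rng(0)
clip = rng.random((30, 128, 128)).astype(np.float32)
x1, x2 = positive_pair(clip, rng)  # two (128, 30) M-mode images
```

In a contrastive objective (e.g. an NT-Xent-style loss), `x1` and `x2` would be pulled together in embedding space while M-modes from other clips serve as negatives; whether the paper uses this exact pairing rule is an assumption here.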
Related papers
- Multi-organ Self-supervised Contrastive Learning for Breast Lesion Segmentation
  This paper employs multi-organ datasets for pre-training models tailored to specific organ-related target tasks.
  The target task is breast tumour segmentation in ultrasound images.
  Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
  arXiv Detail & Related papers (2024-02-21T20:29:21Z)
- Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks
  We investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple classification tasks in lung ultrasound analysis.
  When fine-tuning on three lung ultrasound tasks, pretrained models improved the average across-task area under the receiver operating characteristic curve (AUC) by 0.032 and 0.061 on local and external test sets, respectively.
  arXiv Detail & Related papers (2023-09-05T21:36:42Z)
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images
  Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
  This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
  arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- Revisiting the Transferability of Supervised Pretraining: an MLP Perspective
  Recent progress on unsupervised pretraining methods shows superior transfer performance relative to their supervised counterparts.
  This paper sheds new light on understanding the transferability gap between unsupervised and supervised pretraining from a multilayer perceptron (MLP) perspective.
  We reveal that the projector is also a key factor in the better transferability of unsupervised pretraining methods over supervised pretraining methods.
  arXiv Detail & Related papers (2021-12-01T13:47:30Z)
- How Transferable Are Self-supervised Features in Medical Image Classification Tasks?
  Transfer learning has become a standard practice to mitigate the lack of labeled data in medical classification tasks.
  Self-supervised pretrained models yield richer embeddings than their supervised counterparts.
  Dynamic Visual Meta-Embedding (DVME) is an end-to-end transfer learning approach that fuses pretrained embeddings from multiple models.
  arXiv Detail & Related papers (2021-08-23T10:39:31Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy
  We compare the impact of different training procedures for diabetic retinopathy grading.
  We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability, and robustness to image distortions.
  Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
  arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Evaluating the Robustness of Self-Supervised Learning in Medical Imaging
  Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets.
  We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
  arXiv Detail & Related papers (2021-05-14T17:49:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.