Self-Supervised Pretraining Improves Performance and Inference
Efficiency in Multiple Lung Ultrasound Interpretation Tasks
- URL: http://arxiv.org/abs/2309.02596v1
- Date: Tue, 5 Sep 2023 21:36:42 GMT
- Title: Self-Supervised Pretraining Improves Performance and Inference
Efficiency in Multiple Lung Ultrasound Interpretation Tasks
- Authors: Blake VanBerlo, Brian Li, Jesse Hoey, Alexander Wong
- Abstract summary: We investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple classification tasks in lung ultrasound analysis.
When fine-tuning on three lung ultrasound tasks, pretrained models resulted in an improvement of the average across-task area under the receiver operating curve (AUC) by 0.032 and 0.061 on local and external test sets respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we investigated whether self-supervised pretraining could
produce a neural network feature extractor applicable to multiple
classification tasks in B-mode lung ultrasound analysis. When fine-tuning on
three lung ultrasound tasks, pretrained models resulted in an improvement of
the average across-task area under the receiver operating curve (AUC) by 0.032
and 0.061 on local and external test sets respectively. Compact nonlinear
classifiers trained on features outputted by a single pretrained model did not
improve performance across all tasks; however, they did reduce inference time
by 49% compared to serial execution of separate fine-tuned models. When
training using 1% of the available labels, pretrained models consistently
outperformed fully supervised models, with a maximum observed test AUC increase
of 0.396 for the task of view classification. Overall, the results indicate
that self-supervised pretraining is useful for producing initial weights for
lung ultrasound classifiers.
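As a rough illustration of the shared-feature-extractor design described in the abstract, the sketch below shows how a single pretrained encoder can feed several compact nonlinear classification heads, so one forward pass serves all tasks instead of running separate fine-tuned models serially (the source of the reported inference-time reduction). This is not the authors' implementation; the ResNet-18 backbone, head widths, and per-task output sizes are assumptions for illustration.

```python
# Minimal sketch (assumptions: ResNet-18 backbone, 64-unit hidden layer per head,
# arbitrary per-task class counts). In practice the backbone weights would be
# loaded from a self-supervised pretraining checkpoint and optionally frozen.
import torch
import torch.nn as nn
import torchvision.models as models

class SharedBackboneMultiHead(nn.Module):
    def __init__(self, num_classes_per_task=(2, 2, 3), feat_dim=512):
        super().__init__()
        # Stand-in for a self-supervised pretrained feature extractor.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()           # expose the 512-d feature vector
        self.backbone = backbone
        # One compact nonlinear classifier per downstream task.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n))
            for n in num_classes_per_task
        ])

    def forward(self, x):
        feats = self.backbone(x)              # computed once, shared by all heads
        return [head(feats) for head in self.heads]

model = SharedBackboneMultiHead().eval()
with torch.no_grad():
    logits_per_task = model(torch.randn(1, 3, 128, 128))  # dummy ultrasound frame batch
print([t.shape for t in logits_per_task])
```

Because the (comparatively expensive) backbone is evaluated once and only the lightweight heads differ per task, inference cost grows marginally with the number of tasks, in contrast to executing one full fine-tuned network per task in sequence.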
Related papers
- Efficient Continual Pre-training by Mitigating the Stability Gap [68.49269649759005]
We study the behavior of Large Language Models (LLMs) during continual pre-training.
We propose three effective strategies to enhance LLM performance within a fixed compute budget.
Our strategies improve the average medical task performance of the OpenLlama-3B model from 36.2% to 40.7% with only 40% of the original training budget.
arXiv Detail & Related papers (2024-06-21T02:28:37Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness [52.9493817508055]
We propose Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) to enhance the model's zero-shot adversarial robustness.
Our approach consistently improves clean accuracy by an average of 8.72%.
arXiv Detail & Related papers (2024-01-09T04:33:03Z) - Understanding and Mitigating the Label Noise in Pre-training on
Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z) - Active Learning Guided Fine-Tuning for enhancing Self-Supervised Based
Multi-Label Classification of Remote Sensing Images [0.0]
Self-supervised pre-training combined with fine-tuning on a randomly selected small training set has become a popular approach to minimize annotation efforts.
We investigate the effectiveness of the joint use of self-supervised pre-training with active learning (AL).
Experimental results show the effectiveness of applying AL-guided fine-tuning compared to the application of fine-tuning using a randomly constructed small training set.
arXiv Detail & Related papers (2023-06-12T07:26:21Z) - Exploring the Utility of Self-Supervised Pretraining Strategies for the
Detection of Absent Lung Sliding in M-Mode Lung Ultrasound [72.39040113126462]
Self-supervised pretraining has been observed to improve performance in supervised learning tasks in medical imaging.
This study investigates the utility of self-supervised pretraining prior to conducting supervised fine-tuning for the downstream task of lung sliding classification in M-mode lung ultrasound images.
arXiv Detail & Related papers (2023-04-05T20:01:59Z) - Dataset Pruning: Reducing Training Data by Examining Generalization
Influence [30.30255670341501]
Do all training data contribute to the model's performance?
How can we construct the smallest subset of the entire training data as a proxy training set without significantly sacrificing the model's performance?
arXiv Detail & Related papers (2022-05-19T05:36:35Z) - How Transferable Are Self-supervised Features in Medical Image
Classification Tasks? [0.7734726150561086]
Transfer learning has become a standard practice to mitigate the lack of labeled data in medical classification tasks.
Self-supervised pretrained models yield richer embeddings than their supervised counterparts.
Dynamic Visual Meta-Embedding (DVME) is an end-to-end transfer learning approach that fuses pretrained embeddings from multiple models.
arXiv Detail & Related papers (2021-08-23T10:39:31Z) - An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis [14.466964262040136]
Self-supervised pre-training appears to be an advantageous alternative to supervised pre-training for transfer learning.
By synthesizing annotations on pretext tasks, self-supervision allows models to be pre-trained on large amounts of pseudo-labels before fine-tuning on the target task.
arXiv Detail & Related papers (2021-06-17T03:47:36Z)