Transfer or Self-Supervised? Bridging the Performance Gap in Medical Imaging
- URL: http://arxiv.org/abs/2407.05592v2
- Date: Mon, 09 Dec 2024 13:07:01 GMT
- Title: Transfer or Self-Supervised? Bridging the Performance Gap in Medical Imaging
- Authors: Zehui Zhao, Laith Alzubaidi, Jinglan Zhang, Ye Duan, Usman Naseem, Yuantong Gu
- Abstract summary: This paper compares the performance and robustness of transfer learning and self-supervised learning in the medical field.
We tested data with several common issues in medical domains, such as data imbalance, data scarcity, and domain mismatch.
We provide recommendations to help users apply transfer learning and self-supervised learning methods in medical areas.
- Score: 6.744847405966574
- Abstract: Recently, transfer learning and self-supervised learning have gained significant attention within the medical field due to their ability to mitigate the challenges posed by limited data availability, improve model generalisation, and reduce computational expenses. Transfer learning and self-supervised learning hold immense potential for advancing medical research. However, it is crucial to recognise that transfer learning and self-supervised learning architectures exhibit distinct advantages and limitations, manifesting variations in accuracy, training speed, and robustness. This paper compares the performance and robustness of transfer learning and self-supervised learning in the medical field. Specifically, we pre-trained two models using the same source domain datasets with different pre-training methods and evaluated them on small-sized medical datasets to identify the factors influencing their final performance. We tested data with several common issues in medical domains, such as data imbalance, data scarcity, and domain mismatch, through comparison experiments to understand their impact on specific pre-trained models. Finally, we provide recommendations to help users apply transfer learning and self-supervised learning methods in medical areas, and build more convenient and efficient deployment strategies.
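As a rough illustration of the comparison the abstract describes, the sketch below fine-tunes the same backbone from two initialisations: supervised ImageNet weights (transfer learning) versus weights from a self-supervised pre-training run. This is a minimal PyTorch sketch under stated assumptions; `ssl_checkpoint.pt` and `small_med_loader` are hypothetical placeholders, not artifacts from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(init: str, num_classes: int) -> nn.Module:
    if init == "transfer":
        # Transfer learning: start from supervised ImageNet weights.
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    else:
        # Self-supervised: same architecture, weights taken from an SSL
        # pre-training run on the source domain (hypothetical checkpoint).
        net = models.resnet50(weights=None)
        net.load_state_dict(torch.load("ssl_checkpoint.pt"), strict=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # fresh task head
    return net

def finetune(net: nn.Module, loader, epochs: int = 10, lr: float = 1e-4):
    opt = torch.optim.AdamW(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    net.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(net(x), y).backward()
            opt.step()
    return net

# Using an identical fine-tuning recipe for both initialisations is what
# isolates the effect of the pre-training method, e.g.:
# for init in ("transfer", "ssl"):
#     finetune(build_classifier(init, num_classes=2), small_med_loader)
```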
Related papers
- Self-Supervised Learning for Pre-training Capsule Networks: Overcoming Medical Imaging Dataset Challenges [2.9248916859490173]
This study investigates self-supervised learning methods for pre-training capsule networks in polyp diagnostics for colon cancer.
We used the PICCOLO dataset, comprising 3,433 samples, which exemplifies typical challenges in medical datasets.
Our findings suggest contrastive learning and in-painting techniques are suitable auxiliary tasks for self-supervised learning in the medical domain.
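For illustration, an in-painting pretext task, one of the two auxiliary tasks the summary names, can be sketched as follows; the tiny encoder-decoder is a stand-in, not the paper's capsule network, and data loading for PICCOLO is elided.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InpaintNet(nn.Module):
    """Tiny encoder-decoder stand-in; not the paper's capsule network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def inpaint_step(model, opt, images, mask_size=32):
    """Zero out one random square per image, train to reconstruct it.
    Assumes image height/width exceed mask_size."""
    masked = images.clone()
    b, _, h, w = images.shape
    ys = torch.randint(0, h - mask_size, (b,))
    xs = torch.randint(0, w - mask_size, (b,))
    for i in range(b):
        masked[i, :, ys[i]:ys[i] + mask_size, xs[i]:xs[i] + mask_size] = 0.0
    loss = F.mse_loss(model(masked), images)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```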
arXiv Detail & Related papers (2025-02-07T08:32:26Z)
- Transfer learning from a sparsely annotated dataset of 3D medical images [4.477071833136902]
This study explores the use of transfer learning to improve the performance of deep convolutional neural networks for organ segmentation in medical imaging.
A base segmentation model was trained on a large and sparsely annotated dataset; its weights were used for transfer learning on four new downstream segmentation tasks.
The results showed that transfer learning from the base model was beneficial when small datasets were available.
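A hedged sketch of the weight-transfer step described here: copy every base-model tensor whose name and shape match the downstream model, and leave the task-specific prediction layer freshly initialised. `base_segmenter.pt` is an illustrative path, not a released checkpoint.

```python
import torch

def init_from_base(downstream_model: torch.nn.Module,
                   base_ckpt: str = "base_segmenter.pt") -> torch.nn.Module:
    base_state = torch.load(base_ckpt, map_location="cpu")
    own = downstream_model.state_dict()
    # Keep only tensors whose name and shape match; the per-class prediction
    # layer typically differs between tasks and stays freshly initialised.
    compatible = {k: v for k, v in base_state.items()
                  if k in own and v.shape == own[k].shape}
    own.update(compatible)
    downstream_model.load_state_dict(own)
    return downstream_model
```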
arXiv Detail & Related papers (2023-11-08T21:31:02Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-sourced a strong MedISeg repository, where each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
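As one illustrative baseline for the class-incremental setting this benchmark targets, a small rehearsal (replay) buffer can be mixed into training on each new task; LifeLonger's actual protocols and baselines are richer than this toy sketch.

```python
import random
import torch

class ReplayBuffer:
    """Keep a bounded reservoir sample of past tasks for rehearsal."""
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        for xi, yi in zip(x, y):           # iterate over a mini-batch
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:                          # reservoir sampling keeps it unbiased
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

# During task t, mix each new batch with buffer.sample(k) before the
# optimisation step, then buffer.add(x, y) to retain examples of task t.
```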
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic has spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that ImageNet-pretrained models show a significant increase in performance, generalization, and robustness to image distortions.
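In the same spirit, robustness to a distortion can be probed by sweeping a corruption strength and recording the accuracy drop; the Gaussian-blur corruption below is one illustrative choice, not the study's exact protocol.

```python
import torch
from torchvision.transforms import GaussianBlur

@torch.no_grad()
def accuracy_under_blur(model, loader, sigmas=(0.0, 1.0, 2.0, 4.0)):
    model.eval()
    results = {}
    for sigma in sigmas:
        # Identity at sigma=0, otherwise a fixed-strength Gaussian blur.
        corrupt = (lambda t: t) if sigma == 0 else GaussianBlur(9, sigma=sigma)
        correct = total = 0
        for x, y in loader:
            preds = model(corrupt(x)).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
        results[sigma] = correct / total
    return results  # accuracy at each distortion strength
```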
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
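One common way to realise the adversarial idea described here is a gradient-reversal adversary on the shared features; the following is a sketch of that general technique, not the paper's exact modules or losses.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scaled, sign-flipped gradient on the way back."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

class FairClassifier(nn.Module):
    def __init__(self, encoder, feat_dim, n_classes, n_groups):
        super().__init__()
        self.encoder = encoder                           # shared features
        self.task_head = nn.Linear(feat_dim, n_classes)  # diagnosis
        self.bias_head = nn.Linear(feat_dim, n_groups)   # sensitive attribute

    def forward(self, x, lamb=1.0):
        z = self.encoder(x)
        # Reversed gradients push the encoder to remove group information
        # while the bias head still tries to predict it.
        return self.task_head(z), self.bias_head(GradReverse.apply(z, lamb))

# Train with cross-entropy on both outputs; the reversal makes the two
# objectives adversarial through the shared encoder.
```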
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- On the application of transfer learning in prognostics and health management [0.0]
Data availability has encouraged researchers and industry practitioners to rely on data-based machine learning and deep learning models for fault diagnostics and prognostics more than ever.
These models provide unique advantages; however, their performance is heavily dependent on the training data and how well that data represents the test data.
Transfer learning is an approach that can remedy this issue by retaining portions of what is learned from previous training and transferring them to the new application.
arXiv Detail & Related papers (2020-07-03T23:35:18Z)
- Adversarial Multi-Source Transfer Learning in Healthcare: Application to Glucose Prediction for Diabetic People [4.17510581764131]
We propose a multi-source adversarial transfer learning framework that enables the learning of a feature representation that is similar across the sources.
We apply this idea to glucose forecasting for diabetic people using a fully convolutional neural network.
The approach is especially beneficial when combining data from different datasets, or when too little data is available in an intra-dataset setting.
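A rough sketch of the multi-source adversarial idea, using an explicit two-step update in place of gradient reversal: a discriminator learns to identify the source dataset, while the encoder and forecaster learn to predict glucose and fool the discriminator, pushing the feature representation to be similar across sources. The 1-D fully convolutional encoder, dimensions, and three-source setup are illustrative.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                   # glucose history -> features
    nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten())
forecaster = nn.Linear(64, 1)              # next glucose value
discriminator = nn.Linear(64, 3)           # which of 3 source datasets?

opt_main = torch.optim.Adam(
    [*encoder.parameters(), *forecaster.parameters()], lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def train_step(x, y, src, lamb=0.1):
    # 1) Discriminator learns to identify the source dataset of each sample.
    opt_disc.zero_grad()
    ce(discriminator(encoder(x).detach()), src).backward()
    opt_disc.step()
    # 2) Encoder + forecaster learn to predict well AND fool the
    #    discriminator, aligning the representation across sources.
    opt_main.zero_grad()
    loss = (mse(forecaster(encoder(x)).squeeze(1), y)
            - lamb * ce(discriminator(encoder(x)), src))
    loss.backward()
    opt_main.step()
    return loss.item()
```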
arXiv Detail & Related papers (2020-06-29T11:17:50Z)