Critical Assessment of Transfer Learning for Medical Image Segmentation with Fully Convolutional Neural Networks
- URL: http://arxiv.org/abs/2006.00356v2
- Date: Sun, 3 Apr 2022 16:45:27 GMT
- Title: Critical Assessment of Transfer Learning for Medical Image Segmentation with Fully Convolutional Neural Networks
- Authors: Davood Karimi, Simon K. Warfield, Ali Gholipour
- Abstract summary: We study the role of transfer learning for training fully convolutional networks (FCNs) for medical image segmentation.
Although transfer learning reduces the training time on the target task, the improvement in segmentation accuracy is highly task/data-dependent.
- Score: 8.526949616891283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning is widely used for training machine learning models. Here,
we study the role of transfer learning for training fully convolutional
networks (FCNs) for medical image segmentation. Our experiments show that
although transfer learning reduces the training time on the target task, the
improvement in segmentation accuracy is highly task/data-dependent. Larger
improvements in accuracy are observed when the segmentation task is more
challenging and the target training data is smaller. We observe that
convolutional filters of an FCN change little during training for medical image
segmentation, and still look random at convergence. We further show that quite
accurate FCNs can be built by freezing the encoder section of the network at
random values and only training the decoder section. At least for medical image
segmentation, this finding challenges the common belief that the encoder
section needs to learn data/task-specific representations. We examine the
evolution of FCN representations to gain a better insight into the effects of
transfer learning on the training dynamics. Our analysis shows that although
FCNs trained via transfer learning learn different representations than FCNs
trained with random initialization, the variability among FCNs trained via
transfer learning can be as high as that among FCNs trained with random
initialization. Moreover, feature reuse is not restricted to the early encoder
layers; rather, it can be more significant in deeper layers. These findings
offer new insights and suggest alternative ways of training FCNs for medical
image segmentation.
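The abstract's central finding, that a randomly initialized, frozen encoder plus a trained decoder can still yield accurate models, can be illustrated with a minimal toy sketch. This is not the paper's FCN: the shapes, the random-feature encoder, and the least-squares decoder fit are illustrative assumptions chosen only to make the frozen-encoder idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_in, n_hidden = 200, 32, 64

# Frozen "encoder": a fixed random projection that is never updated.
W_enc = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_hidden))

def encode(x):
    # ReLU over the fixed random projection.
    return np.maximum(x @ W_enc, 0.0)

# Synthetic target standing in for a segmentation output.
X = rng.normal(size=(n_samples, n_in))
true_w = rng.normal(size=(n_in, 1))
y = X @ true_w

# Train only the "decoder": a linear least-squares fit on frozen features.
H = encode(X)
W_dec, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ W_dec
mse = float(np.mean((pred - y) ** 2))
baseline = float(np.mean(y ** 2))  # error of always predicting zero
print(f"decoder-only MSE: {mse:.4f} (trivial baseline: {baseline:.4f})")
```

Because the decoder is fit on top of untrained random features, any gap between the MSE and the trivial baseline comes entirely from decoder training, mirroring the paper's observation that the encoder need not learn task-specific representations to be useful.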
Related papers
- Enhancing pretraining efficiency for medical image segmentation via transferability metrics [0.0]
In medical image segmentation tasks, the scarcity of labeled training data poses a significant challenge.
We introduce a novel transferability metric, based on contrastive learning, that measures how robustly a pretrained model is able to represent the target data.
arXiv Detail & Related papers (2024-10-24T12:11:52Z)
- Transfer learning from a sparsely annotated dataset of 3D medical images [4.477071833136902]
This study explores the use of transfer learning to improve the performance of deep convolutional neural networks for organ segmentation in medical imaging.
A base segmentation model was trained on a large, sparsely annotated dataset; its weights were then used for transfer learning on four new downstream segmentation tasks.
The results showed that transfer learning from the base model was beneficial when small datasets were available.
arXiv Detail & Related papers (2023-11-08T21:31:02Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Interaction of a priori Anatomic Knowledge with Self-Supervised Contrastive Learning in Cardiac Magnetic Resonance Imaging [0.7387261884863349]
Self-supervised contrastive learning has been shown to boost performance in several medical imaging tasks.
In this work, we evaluate the optimal method of incorporating prior anatomic knowledge into a self-supervised contrastive learning (SSCL) training paradigm.
We find that using a priori knowledge of anatomy can greatly improve the downstream diagnostic performance.
arXiv Detail & Related papers (2022-05-25T01:33:37Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and validated for analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve the unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases recall and F1 score by 5.0% and 7.8%, respectively, compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z)
- Self-Supervised Learning for Segmentation [3.8026993716513933]
The anatomical asymmetry of kidneys is leveraged to define an effective proxy task for kidney segmentation via self-supervised learning.
A siamese convolutional neural network (CNN) is used to classify a given pair of kidney sections from CT volumes as being kidneys of the same or different sides.
arXiv Detail & Related papers (2021-01-14T04:28:47Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose a curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing (low-pass) filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
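The curriculum-by-smoothing idea above can be sketched in miniature: low-pass filter a feature map heavily early in training, then anneal the smoothing toward zero so progressively more high-frequency information passes through. The Gaussian kernel, the linear annealing schedule, and all parameter values here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=3):
    # Normalized 1D Gaussian kernel used for separable blurring.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / max(sigma, 1e-8)) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    # Separable Gaussian low-pass filter over a 2D feature map.
    if sigma <= 0:
        return fmap
    k = gaussian_kernel1d(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, fmap)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def sigma_schedule(epoch, total_epochs, sigma0=2.0):
    # Linearly anneal the blur: strong smoothing early, none at the end.
    return sigma0 * max(0.0, 1.0 - epoch / total_epochs)

rng = np.random.default_rng(1)
fmap = rng.normal(size=(16, 16))  # stand-in for a CNN feature map

early = smooth_feature_map(fmap, sigma_schedule(0, 10))   # heavily blurred
late = smooth_feature_map(fmap, sigma_schedule(10, 10))   # unfiltered

print("variance early vs. late:", early.var(), late.var())
```

Blurring a noisy feature map suppresses its high-frequency content, so the early (smoothed) map has lower variance than the late (unfiltered) one, which is the curriculum effect the entry describes: the amount of information in the feature maps increases as training progresses.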
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.