Revisiting Hidden Representations in Transfer Learning for Medical Imaging
- URL: http://arxiv.org/abs/2302.08272v3
- Date: Tue, 5 Dec 2023 12:18:54 GMT
- Title: Revisiting Hidden Representations in Transfer Learning for Medical Imaging
- Authors: Dovile Juodelyte, Amelia Jiménez-Sánchez, Veronika Cheplygina
- Abstract summary: We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While a key component to the success of deep learning is the availability of
massive amounts of training data, medical image datasets are often limited in
diversity and size. Transfer learning has the potential to bridge the gap
between related yet different domains. For medical applications, however, it
remains unclear whether it is more beneficial to pre-train on natural or
medical images. We aim to shed light on this problem by comparing
initialization on ImageNet and RadImageNet on seven medical classification
tasks. Our work includes a replication study, which yields results contrary to
previously published findings. In our experiments, ResNet50 models pre-trained
on ImageNet tend to outperform those pre-trained on RadImageNet. To gain further
insights, we investigate the learned representations using Canonical
Correlation Analysis (CCA) and compare the predictions of the different models.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may
converge to distinct intermediate representations, which appear to diverge
further during fine-tuning. Despite these distinct representations, the
predictions of the models remain similar. Our findings show that the similarity
between networks before and after fine-tuning does not correlate with
performance gains, suggesting that the advantages of transfer learning might
not solely originate from the reuse of features in the early layers of a
convolutional neural network.
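As a rough illustration of the representation analysis described above, the sketch below compares one hidden layer of two ResNet50 backbones (ImageNet- vs. RadImageNet-initialized) on a shared probe batch using linear CCA. The RadImageNet checkpoint path, the chosen layer, the probe images, and the PCA/CCA settings are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: CCA similarity between hidden representations of two
# ResNet50 backbones. Assumes a RadImageNet checkpoint converted to a
# torchvision-compatible state dict at a placeholder path.
import numpy as np
import torch
import torchvision.models as models
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import PCA


def pooled_activations(model: torch.nn.Module, layer_name: str,
                       images: torch.Tensor) -> np.ndarray:
    """Return spatially averaged activations of `layer_name` for `images`."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["act"] = output.detach()

    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()
    return captured["act"].mean(dim=(2, 3)).numpy()  # (batch, channels)


imagenet_net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
radimagenet_net = models.resnet50(weights=None).eval()
radimagenet_net.load_state_dict(torch.load("radimagenet_resnet50.pt"))  # placeholder path

probe = torch.randn(256, 3, 224, 224)  # stand-in for a batch of target-task images
acts_a = pooled_activations(imagenet_net, "layer3", probe)
acts_b = pooled_activations(radimagenet_net, "layer3", probe)

# SVCCA-style simplification: reduce each representation with PCA first so the
# canonical correlations are not trivially inflated by having more channels
# than probe images.
acts_a = PCA(n_components=64).fit_transform(acts_a)
acts_b = PCA(n_components=64).fit_transform(acts_b)

n_comp = 20
proj_a, proj_b = CCA(n_components=n_comp, max_iter=2000).fit_transform(acts_a, acts_b)
corrs = [np.corrcoef(proj_a[:, i], proj_b[:, i])[0, 1] for i in range(n_comp)]
print(f"mean CCA similarity at layer3: {np.mean(corrs):.3f}")
```

Running the same comparison before and after fine-tuning each backbone on a target task, and contrasting it with the models' prediction agreement, mirrors the kind of analysis summarized above.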
Related papers
- Disease Classification and Impact of Pretrained Deep Convolution Neural Networks on Diverse Medical Imaging Datasets across Imaging Modalities (arXiv 2024-08-30)
This paper investigates the intricacies of using pretrained deep convolutional neural networks with transfer learning across diverse medical imaging datasets.
It shows that the use of pretrained models as fixed feature extractors yields poor performance irrespective of the datasets.
It is also found that deeper and more complex architectures did not necessarily result in the best performance.
- Source Matters: Source Dataset Impact on Model Robustness in Medical Imaging (arXiv 2024-03-07)
We show that ImageNet and RadImageNet achieve comparable classification performance.
ImageNet-pretrained models, however, are much more prone to overfitting to confounders.
We recommend that researchers using ImageNet-pretrained models reexamine their models.
- Performance of GAN-based augmentation for deep learning COVID-19 image classification (arXiv 2023-04-18)
The biggest challenge in applying deep learning to the medical domain is the limited availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on a limited COVID-19 chest X-ray image set.
- Enhanced Transfer Learning Through Medical Imaging and Patient Demographic Data Fusion (arXiv 2021-11-29)
We examine the performance enhancement in classification of medical imaging data when image features are combined with associated non-image data.
We utilise transfer learning with networks pretrained on ImageNet, both used directly as feature extractors and fine-tuned on the target domain.
- Is Deep Image Prior in Need of a Good Education? (arXiv 2021-11-23)
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
- A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis (arXiv 2021-08-12)
We conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset.
We present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images.
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy (arXiv 2021-06-25)
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization and robustness to image distortions.
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation (arXiv 2021-01-12)
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding (arXiv 2020-12-10)
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
- Improving Calibration and Out-of-Distribution Detection in Medical Image Segmentation with Convolutional Neural Networks (arXiv 2020-04-12)
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show that not only does a single CNN learn to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often yields more accurate and better-calibrated predictions.
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation (arXiv 2020-03-31)
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.