What Makes Transfer Learning Work For Medical Images: Feature Reuse &
Other Factors
- URL: http://arxiv.org/abs/2203.01825v1
- Date: Wed, 2 Mar 2022 10:13:11 GMT
- Title: What Makes Transfer Learning Work For Medical Images: Feature Reuse &
Other Factors
- Authors: Christos Matsoukas, Johan Fredin Haslum, Moein Sorkhei, Magnus
Söderberg, Kevin Smith
- Abstract summary: It is unclear what factors determine whether - and to what extent - transfer learning to the medical domain is useful.
We explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Transfer learning is a standard technique to transfer knowledge from one
domain to another. For applications in medical imaging, transfer from ImageNet
has become the de-facto approach, despite differences in the tasks and image
characteristics between the domains. However, it is unclear what factors
determine whether - and to what extent - transfer learning to the medical
domain is useful. The long-standing assumption that features from the source
domain get reused has recently been called into question. Through a series of
experiments on several medical image benchmark datasets, we explore the
relationship between transfer learning, data size, the capacity and inductive
bias of the model, as well as the distance between the source and target
domain. Our findings suggest that transfer learning is beneficial in most
cases, and we characterize the important role feature reuse plays in its
success.
Related papers
- Revisiting Hidden Representations in Transfer Learning for Medical
Imaging [2.4545492329339815]
We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
arXiv Detail & Related papers (2023-02-16T13:04:59Z) - Factors of Influence for Transfer Learning across Diverse Appearance
Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z) - Supervised Transfer Learning at Scale for Medical Imaging [8.341246672632582]
We investigate whether modern methods can change the fortune of transfer learning for medical imaging.
We study the class of large-scale pre-trained networks presented by Kolesnikov et al. on three diverse imaging tasks.
We find that for some of these properties transfer from natural to medical images is indeed extremely effective, but only when performed at sufficient scale.
arXiv Detail & Related papers (2021-01-14T23:55:49Z) - DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image
Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z) - What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z) - Universal Model for Multi-Domain Medical Image Retrieval [88.67940265012638]
Medical Image Retrieval (MIR) helps doctors quickly find similar patients' data.
MIR is becoming increasingly helpful due to the wide use of digital imaging modalities.
However, the popularity of various digital imaging modalities in hospitals also poses several challenges to MIR.
arXiv Detail & Related papers (2020-07-14T23:22:04Z) - Adversarially-Trained Deep Nets Transfer Better: Illustration on Image
Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z) - How Much Off-The-Shelf Knowledge Is Transferable From Natural Images To
Pathology Images? [36.009216029815555]
Recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis.
This paper proposes a framework to quantify the knowledge gained by a particular layer and conducts an empirical investigation of pathology-image-centered transfer learning.
The general representation generated by early layers does convey transferred knowledge in various image classification applications.
arXiv Detail & Related papers (2020-04-24T21:29:10Z) - Explicit Domain Adaptation with Loosely Coupled Samples [85.9511585604837]
We propose a transfer learning framework, the core of which is learning an explicit mapping between domains.
Due to its interpretability, this is beneficial for safety-critical applications, like autonomous driving.
arXiv Detail & Related papers (2020-04-24T21:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.