Supervised Transfer Learning at Scale for Medical Imaging
- URL: http://arxiv.org/abs/2101.05913v3
- Date: Thu, 21 Jan 2021 18:07:21 GMT
- Title: Supervised Transfer Learning at Scale for Medical Imaging
- Authors: Basil Mustafa, Aaron Loh, Jan Freyberg, Patricia MacWilliams, Megan
Wilson, Scott Mayer McKinney, Marcin Sieniek, Jim Winkens, Yuan Liu, Peggy
Bui, Shruthi Prabhakara, Umesh Telang, Alan Karthikesalingam, Neil Houlsby
and Vivek Natarajan
- Abstract summary: We investigate whether modern methods can change the fortune of transfer learning for medical imaging.
We study the class of large-scale pre-trained networks presented by Kolesnikov et al. on three diverse imaging tasks.
We find that for some of these properties transfer from natural to medical images is indeed extremely effective, but only when performed at sufficient scale.
- Score: 8.341246672632582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning is a standard technique to improve performance on tasks
with limited data. However, for medical imaging, the value of transfer learning
is less clear. This is likely due to the large domain mismatch between the
usual natural-image pre-training (e.g. ImageNet) and medical images. However,
recent advances in transfer learning have shown substantial improvements from
scale. We investigate whether modern methods can change the fortune of transfer
learning for medical imaging. For this, we study the class of large-scale
pre-trained networks presented by Kolesnikov et al. on three diverse imaging
tasks: chest radiography, mammography, and dermatology. We study both transfer
performance and critical properties for the deployment in the medical domain,
including: out-of-distribution generalization, data-efficiency, sub-group
fairness, and uncertainty estimation. Interestingly, we find that for some of
these properties transfer from natural to medical images is indeed extremely
effective, but only when performed at sufficient scale.
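The study fine-tunes large ImageNet-scale pre-trained networks (the BiT family of Kolesnikov et al.) on each medical task and then measures data efficiency, out-of-distribution behaviour, subgroup fairness, and calibration of the resulting models. The snippet below is a minimal sketch of that transfer step only, using a torchvision ResNet-50 with ImageNet weights as a stand-in for the BiT checkpoints; the dataset path, class count, and hyperparameters are illustrative placeholders, not the paper's configuration.

```python
# Minimal transfer-learning sketch: fine-tune an ImageNet-pretrained backbone
# on a small medical classification dataset. ResNet-50 stands in for the BiT
# architectures studied in the paper; paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 2          # e.g. normal vs. abnormal; placeholder
DATA_DIR = "data/train"  # ImageFolder layout: data/train/<class_name>/*.png

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder(DATA_DIR, transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Start from ImageNet weights and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Fine-tune the whole network with a small learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # placeholder epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Data efficiency, one of the properties examined in the paper, can then be probed by repeating this loop on progressively smaller subsets of `train_set` (for example via `torch.utils.data.Subset`).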
Related papers
- Adversarial-Robust Transfer Learning for Medical Imaging via Domain
Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Revisiting Hidden Representations in Transfer Learning for Medical Imaging [2.4545492329339815]
We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
arXiv Detail & Related papers (2023-02-16T13:04:59Z)
- Quantitative Imaging Principles Improves Medical Image Learning [0.0]
We propose incorporating quantitative imaging principles during generative SSL to improve image quality and quantitative biological accuracy.
Our model also generates images that can be validated with clinical quantitative analysis software.
arXiv Detail & Related papers (2022-06-14T07:51:49Z)
- What Makes Transfer Learning Work For Medical Images: Feature Reuse & Other Factors [1.5207770161985628]
It is unclear what factors determine whether - and to what extent - transfer learning to the medical domain is useful.
We explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain.
arXiv Detail & Related papers (2022-03-02T10:13:11Z)
- Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains available in real-world medical deployment, we address the resulting problem of task-level overfitting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Distant Domain Transfer Learning for Medical Imaging [14.806736041145964]
We propose a distant domain transfer learning (DDTL) method for medical image classification.
Several current studies indicate that lung Computed Tomography (CT) images can be used for a fast and accurate COVID-19 diagnosis.
The proposed method benefits from unlabeled data collected from distant domains which can be easily accessed.
It achieves 96% classification accuracy, which is 13% higher than "non-transfer" algorithms.
arXiv Detail & Related papers (2020-12-10T02:53:52Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to aid the cross-attention process and to overcome the imbalance between classes and easy-dominated samples within each class; a loss in this spirit is sketched after this list.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- ElixirNet: Relation-aware Network Architecture Adaptation for Medical Lesion Detection [90.13718478362337]
We introduce a novel ElixirNet that includes three components: 1) TruncatedRPN balances positive and negative data for false positive reduction; 2) Auto-lesion Block is automatically customized for medical images to incorporate relation-aware operations among region proposals; and 3) Relation transfer module incorporates the semantic relationship.
Experiments on DeepLesion and Kits19 demonstrate the effectiveness of ElixirNet, which improves both sensitivity and precision over FPN with fewer parameters.
arXiv Detail & Related papers (2020-03-03T05:29:49Z)
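The Cross-Attention Networks entry above mentions a loss that goes beyond plain cross-entropy to handle class imbalance and easy-dominated samples. That paper's exact loss is not reproduced here; as a hedged illustration of the general idea, the sketch below uses the standard focal loss (Lin et al., 2017) in a multi-label chest X-ray setting, which down-weights easy examples and re-balances rare positive findings. Shapes and parameter values are placeholders.

```python
# Illustrative multi-label focal loss (Lin et al., 2017), shown only as an
# example of a cross-entropy variant that handles class imbalance and easy
# samples; it is NOT the loss proposed in the Cross-Attention Networks paper.
import torch
import torch.nn.functional as F

def multilabel_focal_loss(logits: torch.Tensor,
                          targets: torch.Tensor,
                          gamma: float = 2.0,
                          alpha: float = 0.25) -> torch.Tensor:
    """logits, targets: (batch, num_classes); targets are 0/1 finding labels."""
    probs = torch.sigmoid(logits)
    # Per-element binary cross-entropy, kept unreduced so it can be re-weighted.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t is the model's probability of the true label for each element.
    p_t = probs * targets + (1.0 - probs) * (1.0 - targets)
    # (1 - p_t)^gamma shrinks the loss of easy, well-classified samples.
    modulating = (1.0 - p_t) ** gamma
    # alpha balances positive vs. negative labels (rare findings vs. absent ones).
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * modulating * ce).mean()

# Example: 4 chest X-rays, 14 findings (placeholder shapes).
logits = torch.randn(4, 14)
targets = torch.randint(0, 2, (4, 14)).float()
print(multilabel_focal_loss(logits, targets))
```

Setting `gamma = 0` and `alpha = 0.5` recovers a scaled ordinary binary cross-entropy, which makes the role of the two extra parameters easy to see.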