A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis
- URL: http://arxiv.org/abs/2108.05930v1
- Date: Thu, 12 Aug 2021 19:08:34 GMT
- Authors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Ruibin Feng,
Michael B. Gotway, Jianming Liang
- Abstract summary: We conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset.
We present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images.
- Score: 7.339428207644444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning from supervised ImageNet models has been frequently used in
medical image analysis. Yet, no large-scale evaluation has been conducted to
benchmark the efficacy of newly-developed pre-training techniques for medical
image analysis, leaving several important questions unanswered. As the first
step in this direction, we conduct a systematic study on the transferability of
models pre-trained on iNat2021, the most recent large-scale fine-grained
dataset, and 14 top self-supervised ImageNet models on 7 diverse medical tasks
in comparison with the supervised ImageNet model. Furthermore, we present a
practical approach to bridge the domain gap between natural and medical images
by continually (pre-)training supervised ImageNet models on medical images. Our
comprehensive evaluation yields new insights: (1) pre-trained models on
fine-grained data yield distinctive local representations that are more
suitable for medical segmentation tasks, (2) self-supervised ImageNet models
learn holistic features more effectively than supervised ImageNet models, and
(3) continual pre-training can bridge the domain gap between natural and
medical images. We hope that this large-scale open evaluation of transfer
learning can direct future research on deep learning for medical imaging.
In the spirit of open science, all code and pre-trained models are available
on our GitHub page: https://github.com/JLiangLab/BenchmarkTransferLearning.
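The continual pre-training recipe can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' exact setup: the 14-class head, learning rate, and synthetic tensors standing in for real medical images are all placeholder assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import torchvision.models as models

# Step 1: start from supervised ImageNet weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Step 2: swap the 1000-way ImageNet head for the medical (pre-)training task.
NUM_MEDICAL_CLASSES = 14  # placeholder, e.g. a chest X-ray label set
model.fc = nn.Linear(model.fc.in_features, NUM_MEDICAL_CLASSES)

# Synthetic stand-in for a medical image dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_MEDICAL_CLASSES, (8,))
loader = DataLoader(TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Step 3: continue training on medical images; the resulting weights then
# serve as the initialization when fine-tuning on each target medical task.
model.train()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```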
Related papers
- From CNN to Transformer: A Review of Medical Image Segmentation Models [7.3150850275578145]
Deep learning for medical image segmentation has become a prevalent trend.
In this paper, we survey the four most representative medical image segmentation models of recent years.
We theoretically analyze the characteristics of these models and quantitatively evaluate their performance on two benchmark datasets.
arXiv Detail & Related papers (2023-08-10T02:48:57Z) - Pick the Best Pre-trained Model: Towards Transferability Estimation for
Medical Image Segmentation [20.03177073703528]
Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
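The summary does not spell out the proposed estimator, so the sketch below illustrates the general idea with a generic stand-in rather than the paper's method: rank candidate pre-trained models by how well a cheap linear probe on their frozen features separates target labels. Features, labels, and model names are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def transferability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Cheap proxy: cross-validated accuracy of a linear probe on frozen
    features. Higher scores suggest better transfer to the target task.
    This is a generic baseline, not the TE method proposed in the paper."""
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, features, labels, cv=3).mean()

# Rank candidate pre-trained models on a small labeled target subset
# (random arrays stand in for features extracted by each candidate).
rng = np.random.default_rng(0)
for name in ["candidate_a", "candidate_b"]:  # hypothetical model names
    feats = rng.normal(size=(120, 256))
    labs = rng.integers(0, 2, size=120)
    print(name, transferability_score(feats, labs))
```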
arXiv Detail & Related papers (2023-07-22T01:58:18Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Revisiting Hidden Representations in Transfer Learning for Medical
Imaging [2.4545492329339815]
We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
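Representation similarity of this kind is commonly measured with linear centered kernel alignment (CKA); the exact measure used by the paper is not stated in this summary. A minimal linear-CKA sketch, with random stand-ins for real layer activations:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices (n_samples x n_features).
    Returns a value in [0, 1]; 1 means identical representations up to an
    orthogonal transform and isotropic scaling."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Compare a layer's activations before vs. after fine-tuning
# (random stand-ins here; use real activations in practice).
rng = np.random.default_rng(0)
before = rng.normal(size=(64, 512))
after = before + 0.1 * rng.normal(size=(64, 512))
print(linear_cka(before, after))  # near 1.0 for a mild perturbation
```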
arXiv Detail & Related papers (2023-02-16T13:04:59Z) - Understanding the Tricks of Deep Learning in Medical Image Segmentation:
Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component can be used in a plug-and-play fashion.
arXiv Detail & Related papers (2022-09-21T12:30:05Z) - On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show significantly better performance, generalization, and robustness to image distortions.
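One way to quantify robustness to image distortions is to compare accuracy on clean versus distorted inputs. The sketch below uses Gaussian blur and synthetic stand-ins for the model and data; the paper's actual distortion suite and models may differ.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T
from torch.utils.data import DataLoader, TensorDataset

def accuracy(model, loader, distort=None):
    """Top-1 accuracy, optionally after distorting the inputs."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            if distort is not None:
                x = distort(x)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

# Stand-ins: a tiny classifier and synthetic "retina" images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))
data = TensorDataset(torch.randn(32, 3, 64, 64), torch.randint(0, 5, (32,)))
loader = DataLoader(data, batch_size=8)

blur = T.GaussianBlur(kernel_size=9, sigma=3.0)
clean, blurred = accuracy(model, loader), accuracy(model, loader, blur)
print(f"robustness gap: {clean - blurred:.3f}")
```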
arXiv Detail & Related papers (2021-06-25T08:32:45Z) - Domain Generalization on Medical Imaging Classification using Episodic
Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains available in real-world medical deployments, we address the task-level overfitting unique to this setting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Big Self-Supervised Models Advance Medical Image Classification [36.23989703428874]
We study the effectiveness of self-supervised learning as a pretraining strategy for medical image classification.
We use a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case.
We show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images.
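The core idea of MICLe is that a positive pair consists of two different images of the same patient, rather than two augmentations of one image. A simplified InfoNCE-style sketch with in-batch negatives (the paper's full objective may differ):

```python
import torch
import torch.nn.functional as F

def multi_instance_contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss where z1[i] and z2[i] embed two *different*
    images of patient i, so positives span distinct views of the
    underlying pathology rather than augmentations of a single image."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # patient-vs-patient similarities
    targets = torch.arange(z1.size(0))      # z1[i] should match z2[i]
    return F.cross_entropy(logits, targets)

# Toy usage: 4 patients, 128-d embeddings from two images each.
z_a, z_b = torch.randn(4, 128), torch.randn(4, 128)
print(multi_instance_contrastive_loss(z_a, z_b))
```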
arXiv Detail & Related papers (2021-01-13T17:36:31Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Multi-task pre-training of deep neural networks for digital pathology [8.74883469030132]
We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images.
We show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance.
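Multi-task pre-training of this kind typically shares one backbone across per-task classification heads. A minimal sketch with three hypothetical tasks standing in for the paper's pool of 22:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiTaskModel(nn.Module):
    """Shared backbone with one classification head per pathology task."""

    def __init__(self, classes_per_task):
        super().__init__()
        backbone = models.resnet18(weights=None)  # or ImageNet weights
        backbone.fc = nn.Identity()               # expose 512-d features
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(512, c) for c in classes_per_task]
        )

    def forward(self, x, task_id):
        # Route shared features to the head of the sampled task.
        return self.heads[task_id](self.backbone(x))

model = MultiTaskModel(classes_per_task=[2, 5, 3])  # toy task sizes
x = torch.randn(2, 3, 224, 224)
print(model(x, task_id=1).shape)  # torch.Size([2, 5])
```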
arXiv Detail & Related papers (2020-05-05T08:50:17Z)