Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation
- URL: http://arxiv.org/abs/2307.11958v1
- Date: Sat, 22 Jul 2023 01:58:18 GMT
- Title: Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation
- Authors: Yuncheng Yang, Meng Wei, Junjun He, Jie Yang, Jin Ye, Yun Gu
- Abstract summary: Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
- Score: 20.03177073703528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning is a critical technique in training deep neural networks
for the challenging medical image segmentation task that requires enormous
resources. With the abundance of medical image data, many research institutions
release models trained on various datasets that can form a huge pool of
candidate source models to choose from. Hence, it's vital to estimate the
source models' transferability (i.e., the ability to generalize across
different downstream tasks) for proper and efficient model reuse. Since existing
TE algorithms fall short when applied to medical image segmentation, in this
paper we propose a new Transferability Estimation (TE) method. We first analyze
the drawbacks of using the existing TE
algorithms for medical image segmentation and then design a source-free TE
framework that considers both class consistency and feature variety for better
estimation. Extensive experiments show that our method surpasses all current
algorithms for transferability estimation in medical image segmentation. Code
is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV
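To make the two criteria concrete, below is a minimal sketch of a source-free transferability score that combines a class-consistency term (same-class features should cluster) with a feature-variety term (features should not collapse). Everything here is an assumption for illustration only: the function names, the cosine-similarity and log-determinant choices, and the alpha weighting are not taken from the paper, and this is not the authors' CCFV implementation (see the repository above for that).

```python
import torch
import torch.nn.functional as F

def class_consistency(features, labels):
    # Mean pairwise cosine similarity within each class: higher means the
    # encoder already groups same-class samples together on the target data.
    scores = []
    for c in labels.unique():
        f = features[labels == c]
        if f.shape[0] < 2:
            continue
        f = F.normalize(f, dim=1)
        sim = f @ f.t()
        n = f.shape[0]
        scores.append((sim.sum() - n) / (n * (n - 1)))  # drop self-similarity
    return torch.stack(scores).mean()

def feature_variety(features):
    # Log-determinant of the feature covariance: larger means the features
    # span more directions, i.e. the representation has not collapsed.
    f = features - features.mean(dim=0, keepdim=True)
    cov = f.t() @ f / (f.shape[0] - 1)
    cov = cov + 1e-4 * torch.eye(cov.shape[0])  # small ridge for numerical stability
    return torch.logdet(cov)

def transferability_score(features, labels, alpha=1.0):
    # Single scalar used to rank candidate source models; alpha is a free
    # trade-off parameter in this sketch, not a value from the paper.
    return class_consistency(features, labels) + alpha * feature_variety(features)

if __name__ == "__main__":
    # Toy stand-in for pooled per-region features and their segmentation
    # labels extracted from one candidate pre-trained encoder on the target set.
    feats = torch.randn(2048, 64)
    labels = torch.randint(0, 3, (2048,))
    print(transferability_score(feats, labels).item())
```

In practice one would extract features from every candidate pre-trained encoder on the same target training set, compute this score per candidate, and fine-tune the highest-scoring model.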
Related papers
- From CNN to Transformer: A Review of Medical Image Segmentation Models [7.3150850275578145]
Deep learning for medical image segmentation has become a prevalent trend.
In this paper, we conduct a survey of the four most representative medical image segmentation models of recent years.
We theoretically analyze the characteristics of these models and quantitatively evaluate their performance on two benchmark datasets.
arXiv Detail & Related papers (2023-08-10T02:48:57Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
The training of an efficacious deep learning model requires a large amount of data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Pre-text Representation Transfer for Deep Learning with Limited Imbalanced Data: Application to CT-based COVID-19 Detection [18.72489078928417]
We propose a novel concept of Pre-text Representation Transfer (PRT).
PRT retains the original classification layers and updates the representation layers through an unsupervised pre-text task.
Our results show a consistent gain over conventional transfer learning with the proposed method.
arXiv Detail & Related papers (2023-01-21T04:47:35Z)
- A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis [7.339428207644444]
We conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset.
We present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images.
arXiv Detail & Related papers (2021-08-12T19:08:34Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.