A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis
- URL: http://arxiv.org/abs/2101.05410v1
- Date: Thu, 14 Jan 2021 01:39:19 GMT
- Title: A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis
- Authors: Yi Liu, Shuiwang Ji
- Abstract summary: We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
- Score: 49.3704402041314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computed tomography (CT) imaging is a promising approach to
diagnosing COVID-19. Machine learning methods can be employed to train models
from labeled CT images and predict whether a case is positive or negative.
However, no large-scale, publicly available CT dataset exists for training accurate models.
In this work, we propose a multi-stage attentive transfer learning framework
for improving COVID-19 diagnosis. Our proposed framework consists of three
stages to train accurate diagnosis models through learning knowledge from
multiple source tasks and data of different domains. Importantly, we propose a
novel self-supervised learning method to learn multi-scale representations for
lung CT images. Our method captures semantic information from the whole lung
and highlights the functionality of each lung region for better representation
learning. The method is then integrated into the last stage of the proposed
transfer learning framework to reuse the complex patterns learned from the same
CT images. We use a base model integrating self-attention (ATTN) and
convolutional operations. Experimental results show that networks with ATTN
modules gain larger performance improvements from transfer learning than
networks without them, indicating that attention exhibits higher
transferability than convolution. Our results also show that the proposed
self-supervised learning method outperforms several baseline methods.
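The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal, illustrative toy in pure Python: the stage names, the scalar "weights", and the update rules are all hypothetical stand-ins, not the authors' implementation, which trains deep networks with self-attention and convolution at each stage.

```python
# Toy sketch of a three-stage transfer learning pipeline: weights learned on
# source tasks are adapted to the lung-CT domain, then refined with a
# self-supervised pass on the target CT images. All numeric updates here are
# illustrative placeholders for actual gradient-based training.

def stage1_source_pretraining(source_tasks):
    """Learn initial weights from multiple source tasks (e.g., natural images)."""
    # Toy "training": average a per-task statistic into one backbone weight.
    per_task = [sum(task) / len(task) for task in source_tasks]
    return {"backbone": sum(per_task) / len(per_task)}

def stage2_domain_transfer(weights, lung_domain_data):
    """Adapt the pretrained weights to lung-CT data from a different domain."""
    shift = sum(lung_domain_data) / len(lung_domain_data)
    return {"backbone": 0.5 * weights["backbone"] + 0.5 * shift}

def stage3_self_supervised_finetune(weights, ct_images):
    """Refine on the target CT images with a self-supervised objective,
    reusing patterns learned from the same images (multi-scale in the paper)."""
    target = sum(ct_images) / len(ct_images)
    return {"backbone": weights["backbone"] + 0.1 * (target - weights["backbone"])}

def train_pipeline(source_tasks, lung_domain_data, ct_images):
    w = stage1_source_pretraining(source_tasks)
    w = stage2_domain_transfer(w, lung_domain_data)
    return stage3_self_supervised_finetune(w, ct_images)
```

The point of the structure, as in the paper, is that each stage starts from the weights produced by the previous one rather than from scratch, so knowledge from source tasks and other domains is carried into the final COVID-19 diagnosis model.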
Related papers
- CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtypes from multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z) - Learned Image resizing with efficient training (LRET) facilitates
improved performance of large-scale digital histopathology image
classification models [0.0]
Histologic examination plays a crucial role in oncology research and diagnostics.
Current approaches to training deep convolutional neural networks (DCNN) result in suboptimal model performance.
We introduce a novel approach that addresses the main limitations of traditional histopathology classification model training.
arXiv Detail & Related papers (2024-01-19T23:45:47Z) - MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep
Models for X-ray Images of Multiple Body Parts [63.30352394004674]
Multi-task Self-supervised Continual Learning (MUSCLE) is a novel self-supervised pre-training pipeline for medical imaging tasks.
MUSCLE aggregates X-rays collected from multiple body parts for representation learning, and adopts a well-designed continual learning procedure.
We evaluate MUSCLE using 9 real-world X-ray datasets with various tasks, including pneumonia classification, skeletal abnormality classification, lung segmentation, and tuberculosis (TB) detection.
arXiv Detail & Related papers (2023-10-03T12:19:19Z) - Pick the Best Pre-trained Model: Towards Transferability Estimation for
Medical Image Segmentation [20.03177073703528]
Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
arXiv Detail & Related papers (2023-07-22T01:58:18Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Domain Generalization for Mammographic Image Analysis with Contrastive
Learning [62.25104935889111]
The training of an efficacious deep learning model requires large data with diverse styles and qualities.
A novel contrastive learning method is developed to equip the deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z) - Self-supervised Model Based on Masked Autoencoders Advance CT Scans
Classification [0.0]
This paper is inspired by the self-supervised learning algorithm MAE.
It uses the MAE model pre-trained on ImageNet to perform transfer learning on CT Scans dataset.
This method improves the generalization performance of the model and avoids the risk of overfitting on small datasets.
arXiv Detail & Related papers (2022-10-11T00:52:05Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Contrastive Cross-site Learning with Redesigned Net for COVID-19 CT
Classification [20.66003113364796]
The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading across hundreds of countries.
To assist clinical diagnosis and reduce the tedious workload of image interpretation, developing automated tools for COVID-19 identification from CT images is highly desired.
This paper proposes a novel joint learning framework to perform accurate COVID-19 identification by effectively learning with heterogeneous datasets.
arXiv Detail & Related papers (2020-09-15T11:09:04Z) - Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent
Multi-View Representation Learning [48.05232274463484]
Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world.
Due to the large number of affected patients and the heavy labor for doctors, computer-aided diagnosis with machine learning algorithms is urgently needed.
In this study, we propose to conduct the diagnosis of COVID-19 with a series of features extracted from CT images.
arXiv Detail & Related papers (2020-05-06T15:19:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.