Boosting Deep Transfer Learning for COVID-19 Classification
- URL: http://arxiv.org/abs/2102.08085v1
- Date: Tue, 16 Feb 2021 11:15:23 GMT
- Title: Boosting Deep Transfer Learning for COVID-19 Classification
- Authors: Fouzia Altaf, Syed M.S. Islam, Naeem K. Janjua, Naveed Akhtar
- Abstract summary: COVID-19 classification using chest Computed Tomography (CT) has been found pragmatically useful.
It is still unknown if there are better strategies than vanilla transfer learning for more accurate COVID-19 classification with limited CT data.
This paper devises a novel `model' augmentation technique that allows a considerable performance boost to transfer learning for the task.
- Score: 18.39034705389625
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: COVID-19 classification using chest Computed Tomography (CT) has been found
pragmatically useful by several studies. Due to the lack of annotated samples,
these studies recommend transfer learning and explore the choices of
pre-trained models and data augmentation. However, it is still unknown if there
are better strategies than vanilla transfer learning for more accurate COVID-19
classification with limited CT data. This paper provides an affirmative answer,
devising a novel `model' augmentation technique that allows a considerable
performance boost to transfer learning for the task. Our method systematically
reduces the distributional shift between the source and target domains and
considers augmenting deep learning with complementary representation learning
techniques. We establish the efficacy of our method with publicly available
datasets and models, along with identifying contrasting observations in the
previous studies.
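The listing does not spell out the paper's `model' augmentation procedure. As an illustrative stand-in for the kind of source-target distributional-shift reduction the abstract describes (not necessarily the authors' method), correlation alignment (CORAL-style second-order statistics matching) transforms source features so their covariance matches the target domain's before fine-tuning:

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Whiten source features and re-color them with the target
    covariance, so the two feature sets share second-order statistics
    (correlation alignment)."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # Matrix power of a symmetric positive-definite matrix via eigh.
        w, v = np.linalg.eigh(m)
        return (v * w**p) @ v.T

    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 4)) @ np.diag([1.0, 2.0, 0.5, 3.0])
tgt = rng.normal(size=(500, 4))
aligned = coral_align(src, tgt)
# The aligned source covariance now matches the target covariance.
print(np.allclose(np.cov(aligned, rowvar=False),
                  np.cov(tgt, rowvar=False), atol=1e-3))
```

The identity cov(XA) = Aᵀ cov(X) A makes the match exact up to the `eps` regularizer, which is why the tolerance can be tight.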
Related papers
- The ART of Transfer Learning: An Adaptive and Robust Pipeline [2.294014185517203]
We propose Adaptive Robust Transfer Learning (ART), a flexible pipeline of performing transfer learning with generic machine learning algorithms.
We establish the non-asymptotic learning theory of ART, providing a provable theoretical guarantee for achieving adaptive transfer while preventing negative transfer.
We demonstrate the promising performance of ART through extensive empirical studies on regression, classification, and sparse learning.
arXiv Detail & Related papers (2023-04-30T16:36:57Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation [77.34726150561087]
This article explores various data distillation techniques that can reduce the amount of data required to successfully train deep networks.
Inspired by recent ideas, we suggest new data distillation techniques based on generative teaching networks, gradient matching, and the Implicit Function Theorem.
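The gradient-matching idea in this entry can be shown in miniature. For a linear least-squares model, the gradient at any parameter vector is determined by the moments XᵀX/n and Xᵀy/n, so a two-sample synthetic set that matches those moments matches the real data's gradients everywhere and trains to the same solution. This is a toy reduction for illustration, not the paper's generative-teaching-network pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# "Real" data: 500 noisy samples of y = 2x + 1 (feature plus bias column).
X = np.c_[rng.normal(size=500), np.ones(500)]
y = X @ np.array([2.0, 1.0]) + 0.1 * rng.normal(size=500)

# For linear least squares the gradient at any w is
#   g(w) = (X.T @ X / n) @ w - X.T @ y / n,
# so matching gradients for all w reduces to matching these two moments.
M = X.T @ X / len(X)
m = X.T @ y / len(X)

k = 2                                       # distilled set size
L = np.linalg.cholesky(M)                   # M = L @ L.T
Xs = np.sqrt(k) * L.T                       # Xs.T @ Xs / k == M
ys = np.sqrt(k) * np.linalg.solve(L, m)     # Xs.T @ ys / k == m

w_real = np.linalg.lstsq(X, y, rcond=None)[0]
w_syn = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print(np.allclose(w_real, w_syn, atol=1e-6))  # both recover the same model
```

Training on the two distilled points yields the same least-squares solution as training on all 500 real points, which is the essence of gradient-matched data distillation.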
arXiv Detail & Related papers (2022-03-16T11:45:32Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
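The marginal-likelihood objective in this entry can be illustrated in one dimension (a toy stand-in for the paper's differentiable Kronecker-factored version): Laplace's method expands the log joint to second order around its mode and integrates the resulting Gaussian in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, size=20)   # toy observations

def log_joint(mu):
    """Log of likelihood N(data | mu, 1) times prior N(mu | 0, 1)."""
    mu = np.asarray(mu, dtype=float)
    sq = np.sum((data - mu[..., None]) ** 2, axis=-1)
    return -0.5 * (sq + mu ** 2) - 0.5 * (len(data) + 1) * np.log(2 * np.pi)

# Laplace approximation: quadratic expansion around the mode, then the
# Gaussian integral is analytic.
mode = np.sum(data) / (len(data) + 1)      # argmax of log_joint
curvature = len(data) + 1.0                # -(d^2/dmu^2) log_joint
log_Z_laplace = log_joint(mode) + 0.5 * np.log(2 * np.pi / curvature)

# Reference value by brute-force quadrature.
grid = np.linspace(-5.0, 5.0, 20001)
log_Z_grid = np.log(np.trapz(np.exp(log_joint(grid)), grid))
print(abs(log_Z_laplace - log_Z_grid) < 1e-6)
```

Because this conjugate model has an exactly quadratic log joint, the Laplace estimate agrees with quadrature to numerical precision; in deep networks the same construction only approximates the marginal likelihood.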
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Embedding Transfer with Label Relaxation for Improved Metric Learning [43.94511888670419]
We present a novel method for embedding transfer, a task of transferring knowledge of a learned embedding model to another.
Our method exploits pairwise similarities between samples in the source embedding space as the knowledge, and transfers them through a loss used for learning target embedding models.
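The pairwise-similarity transfer in this entry can be sketched as follows. The paper's actual objective is a relaxed contrastive loss; the version below substitutes a simple squared-error penalty between source and target similarity matrices, so treat it as an illustrative surrogate:

```python
import numpy as np

def pairwise_cosine(Z):
    """Cosine similarity matrix of row-wise embeddings."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn.T

def relational_transfer_loss(source_emb, target_emb):
    """Penalize the target embedding for disagreeing with the source
    embedding's pairwise similarity structure (squared-error surrogate
    for the paper's relaxed contrastive loss)."""
    diff = pairwise_cosine(source_emb) - pairwise_cosine(target_emb)
    return np.mean(diff ** 2)

rng = np.random.default_rng(3)
src = rng.normal(size=(8, 32))     # teacher (source) embeddings
tgt = src[:, :16].copy()           # student sees only part of the signal
print(relational_transfer_loss(src, src) == 0.0)  # identical structure
print(relational_transfer_loss(src, tgt) > 0.0)   # structure lost, loss > 0
```

Minimizing such a loss during target training transfers the source model's relational knowledge without requiring matching embedding dimensions.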
arXiv Detail & Related papers (2021-03-27T13:35:03Z)
- Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases the recall and F1 score by 5.0% and 7.8% compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best-performing model achieved an accuracy of 0.893 and a recall of 0.897, outperforming its baseline recall by 9.3%.
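As a hedged sketch of the multi-source fine-tuning idea (toy logistic regression with hypothetical domain shifts, not the paper's CT pipeline), a model can be pre-trained sequentially on several source domains and then fine-tuned on scarce target data:

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([2.0, -1.0])   # shared labeling rule across domains

def make_data(n, shift):
    """Inputs shifted per domain; labels follow the same linear rule."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(X, y, w, steps=300, lr=0.5):
    """Full-batch gradient descent on logistic loss, starting from w."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(X)
    return w

# Warm-start on two source domains, then fine-tune on the small target set.
w = np.zeros(2)
for shift in (0.5, -0.5):                  # source domain 1, then 2
    Xs, ys = make_data(500, shift)
    w = train(Xs, ys, w)
Xt, yt = make_data(20, 0.0)                # scarce target data
w = train(Xt, yt, w, steps=100)

X_test, y_test = make_data(1000, 0.0)
acc = np.mean((X_test @ w > 0) == (y_test == 1))
print(acc > 0.9)   # transferred model classifies the target domain well
```

The multi-source warm start gives the target fine-tuning stage a weight vector already close to the shared decision rule, which is the mechanism the entry's fine-tuning approach exploits.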
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Domain Knowledge Integration By Gradient Matching For Sample-Efficient Reinforcement Learning [0.0]
We propose a gradient matching algorithm to improve sample efficiency by utilizing target slope information from the dynamics to aid the model-free learner.
We demonstrate this by presenting a technique for matching the gradient information from the model-based learner with the model-free component in an abstract low-dimensional space.
arXiv Detail & Related papers (2020-05-28T05:02:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.