Generative Transfer Learning: Covid-19 Classification with a few Chest
X-ray Images
- URL: http://arxiv.org/abs/2208.05305v1
- Date: Wed, 10 Aug 2022 12:37:52 GMT
- Title: Generative Transfer Learning: Covid-19 Classification with a few Chest
X-ray Images
- Authors: Suvarna Kadam and Vinay G. Vaidya
- Abstract summary: Deep learning models can expedite interpretation and alleviate the work of human experts.
Deep Transfer Learning addresses this problem by using a pretrained model in the public domain.
We show that a simpler generative source model, pretrained on a single but related concept, can perform as effectively as existing larger pretrained models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detection of diseases through medical imaging is preferred due to its
non-invasive nature. Medical imaging supports multiple modalities of data that
enable a thorough and quick look inside a human body. However, interpreting
imaging data is often time-consuming and requires a great deal of human
expertise. Deep learning models can expedite interpretation and alleviate the
work of human experts. However, these models are data-intensive and require
significant labeled images for training. During novel disease outbreaks such as
Covid-19, we often do not have the required labeled imaging data, especially at
the start of the epidemic. Deep Transfer Learning addresses this problem by
using a pretrained model in the public domain, e.g., any variant of VGGNet,
ResNet, Inception, or DenseNet, as a feature learner to quickly
adapt the target task from fewer samples. Most pretrained models are deep with
complex architectures. They are trained with large multi-class datasets such as
ImageNet, with significant human effort in architecture design and
hyperparameter tuning. We show that a simpler generative source model, pretrained
on a single but related concept, can perform as effectively as existing larger
pretrained models. We demonstrate the usefulness of generative transfer
learning, which requires less compute and training data, for Few-Shot Learning
(FSL) with a Covid-19 binary classification use case. We compare classic deep
transfer learning with our approach and also report FSL results with three
settings of 84, 20, and 10 training samples. The model implementation of
generative FSL for Covid-19 classification is available publicly at
https://github.com/suvarnak/GenerativeFSLCovid.git.
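The few-shot setting described above is often reduced to classifying frozen-backbone features by their nearest class prototype. The following is a minimal NumPy sketch of that nearest-class-mean baseline, not the paper's implementation: random Gaussian vectors stand in for features extracted by a pretrained model, and the function names are illustrative.

```python
import numpy as np

def class_prototypes(features, labels):
    """Compute the mean feature vector (prototype) for each class."""
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(axis=0) for c in classes])

def predict(features, classes, prototypes):
    """Assign each query sample to the class of its nearest prototype."""
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy stand-in for backbone features: two well-separated Gaussian clusters
# playing the roles of "Covid-19" (1) and "normal" (0) support samples.
rng = np.random.default_rng(0)
support_x = np.concatenate([rng.normal(0, 0.1, (10, 8)),
                            rng.normal(1, 0.1, (10, 8))])
support_y = np.array([0] * 10 + [1] * 10)

classes, protos = class_prototypes(support_x, support_y)
query = rng.normal(1, 0.1, (5, 8))  # five queries near the "positive" cluster
print(predict(query, classes, protos))  # → [1 1 1 1 1]
```

With only 10-84 labeled images per class, a simple prototype rule over good pretrained features can be surprisingly competitive, which is the regime the paper's FSL experiments probe.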
Related papers
- Navigating Data Scarcity using Foundation Models: A Benchmark of Few-Shot and Zero-Shot Learning Approaches in Medical Imaging [1.533133219129073]
Data scarcity is a major limiting factor for applying modern machine learning techniques to clinical tasks.
We conducted a benchmark study of few-shot learning and zero-shot learning using 16 pretrained foundation models on 19 diverse medical imaging datasets.
Our results indicate that BiomedCLIP, a model pretrained exclusively on medical data, performs best on average for very small training set sizes.
arXiv Detail & Related papers (2024-08-15T09:55:51Z)
- Meta-Transfer Derm-Diagnosis: Exploring Few-Shot Learning and Transfer Learning for Skin Disease Classification in Long-Tail Distribution [1.8024397171920885]
This study conducts a detailed examination of the benefits and drawbacks of episodic and conventional training methodologies.
With minimal labeled examples, our models showed substantial information gains and better performance compared to previously trained models.
Our experiments, ranging from 2-way to 5-way classifications with up to 10 examples, showed a growing success rate for traditional transfer learning methods.
arXiv Detail & Related papers (2024-04-25T17:56:45Z)
- FreeSeg-Diff: Training-Free Open-Vocabulary Segmentation with Diffusion Models [56.71672127740099]
We focus on the task of image segmentation, which is traditionally solved by training models on closed-vocabulary datasets.
We leverage different and relatively small-sized, open-source foundation models for zero-shot open-vocabulary segmentation.
Our approach (dubbed FreeSeg-Diff), which does not rely on any training, outperforms many training-based approaches on both Pascal VOC and COCO datasets.
arXiv Detail & Related papers (2024-03-29T10:38:25Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL)
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Image Captions are Natural Prompts for Text-to-Image Models [70.30915140413383]
We analyze the relationship between the training effect of synthetic data and the synthetic data distribution induced by prompts.
We propose a simple yet effective method that prompts text-to-image generative models to synthesize more informative and diverse training data.
Our method significantly improves the performance of models trained on synthetic training data.
arXiv Detail & Related papers (2023-07-17T14:38:11Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers)
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
- Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets [0.0]
Deep learning approaches have reached near-human or better-than-human performance on many diagnostic tasks.
We introduce a practical approach to improve the predictions of a pre-trained model through Few-Shot Learning.
arXiv Detail & Related papers (2022-04-16T15:44:56Z)
- KNN-Diffusion: Image Generation via Large-Scale Retrieval [40.6656651653888]
Learning to adapt enables several new capabilities.
Fine-tuning trained models to new samples can be achieved by simply adding them to the table.
Our diffusion-based model trains on images only, by leveraging a joint Text-Image multi-modal metric.
arXiv Detail & Related papers (2022-04-06T14:13:35Z)
- One Representative-Shot Learning Using a Population-Driven Template with Application to Brain Connectivity Classification and Evolution Prediction [0.0]
Graph neural networks (GNNs) have been introduced to the field of network neuroscience.
We take a very different approach in training GNNs, where we aim to learn with one sample and achieve the best performance.
We present the first one-shot paradigm where a GNN is trained on a single population-driven template.
arXiv Detail & Related papers (2021-10-06T08:36:00Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.