Pre-text Representation Transfer for Deep Learning with Limited
Imbalanced Data: Application to CT-based COVID-19 Detection
- URL: http://arxiv.org/abs/2301.08888v1
- Date: Sat, 21 Jan 2023 04:47:35 GMT
- Title: Pre-text Representation Transfer for Deep Learning with Limited
Imbalanced Data: Application to CT-based COVID-19 Detection
- Authors: Fouzia Altaf, Syed M. S. Islam, Naeem K. Janjua, Naveed Akhtar
- Abstract summary: We propose a novel concept of Pre-text Representation Transfer (PRT).
PRT retains the original classification layers and updates the representation layers through an unsupervised pre-text task.
Our results show a consistent gain of the proposed method over conventional transfer learning.
- Score: 18.72489078928417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Annotating medical images for disease detection is often tedious and
expensive. Moreover, the available training samples for a given task are
generally scarce and imbalanced. These conditions are not conducive for
learning effective deep neural models. Hence, it is common to 'transfer' neural
networks trained on natural images to the medical image domain. However, this
paradigm falls short in performance due to the large domain gap between natural
and medical image data. To address this, we propose the novel concept of Pre-text
Representation Transfer (PRT). In contrast to conventional transfer
learning, which fine-tunes a source model after replacing its classification
layers, PRT retains the original classification layers and updates the
representation layers through an unsupervised pre-text task. The task is
performed with (original, not synthetic) medical images, without utilizing any
annotations. This enables representation transfer with a large amount of
training data. This high-fidelity representation transfer allows us to use the
resulting model as a more effective feature extractor. Moreover, we can also
subsequently perform the traditional transfer learning with this model. We
devise a collaborative representation based classification layer for the case
when we leverage the model as a feature extractor. We fuse the output of this
layer with the predictions of a model induced via traditional transfer
learning performed over our pre-text transferred model. The utility of our
technique for limited and imbalanced data classification problems is
demonstrated with an extensive five-fold evaluation of three large-scale
models, tested for five different class-imbalance ratios for CT-based COVID-19
detection. Our results show that the proposed method yields a consistent gain
over conventional transfer learning.
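To make the PRT idea concrete, the following is a minimal, hedged sketch in PyTorch. The abstract does not specify the backbone or the pre-text task, so the ResNet-18 source model, the rotation-prediction task, and names such as `pretext_head` and `prt_step` are illustrative assumptions, not the paper's stated configuration.

```python
# Minimal PRT sketch (illustrative only): keep the source model's original
# classifier frozen, and update only the representation layers with an
# unsupervised pretext task on unlabeled medical images. The rotation task
# and ResNet-18 backbone are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.fc.parameters():     # retain (freeze) the classification layer
    p.requires_grad = False

# Small head for the pretext task: predict which of 4 rotations was applied.
pretext_head = nn.Linear(model.fc.in_features, 4)

def features(x):
    """Representation layers of the source model (everything before `fc`)."""
    x = model.maxpool(model.relu(model.bn1(model.conv1(x))))
    x = model.layer4(model.layer3(model.layer2(model.layer1(x))))
    return torch.flatten(model.avgpool(x), 1)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad]
    + list(pretext_head.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def prt_step(unlabeled_batch):
    """One pretext update; `unlabeled_batch` is a (B, 3, H, W) tensor of CT slices."""
    k = torch.randint(0, 4, (unlabeled_batch.size(0),))  # free rotation labels
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(unlabeled_batch, k)])
    loss = criterion(pretext_head(features(rotated)), k)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After such pretext training on unlabeled medical images, the model can serve as a feature extractor or be fine-tuned conventionally on the small labeled set, as the abstract describes.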
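The collaborative representation based classification layer can likewise be sketched. The abstract does not spell out its formulation, so this NumPy sketch follows the standard CRC-RLS recipe (ridge-regularized coding of a query over all training features, then class-wise residual comparison); `crc_predict` and its parameters are assumed names for illustration.

```python
# Hedged NumPy sketch of a collaborative representation based classifier
# (standard CRC-RLS recipe, assumed here since the abstract gives no details):
# code the query over ALL training features with a ridge penalty, then assign
# the class with the smallest regularized reconstruction residual.
import numpy as np

def crc_predict(D, labels, y, lam=1e-3):
    """D: (d, n) training features (one column per sample), labels: (n,)
    class ids, y: (d,) query feature extracted by the transferred model."""
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        residual = (np.linalg.norm(y - D[:, idx] @ alpha[idx])
                    / (np.linalg.norm(alpha[idx]) + 1e-12))
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```

The abstract then fuses this layer's output with the predictions of a conventionally fine-tuned model; a simple score-level average of normalized outputs would be one (assumed) way to realize that fusion.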
Related papers
- Pick the Best Pre-trained Model: Towards Transferability Estimation for
Medical Image Segmentation [20.03177073703528]
Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
arXiv Detail & Related papers (2023-07-22T01:58:18Z) - Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA Generative Adversarial Network is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs)
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - Revisiting Hidden Representations in Transfer Learning for Medical
Imaging [2.4545492329339815]
We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
arXiv Detail & Related papers (2023-02-16T13:04:59Z) - PatchNR: Learning from Small Data by Patch Normalizing Flow
Regularization [57.37911115888587]
We introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows.
Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images.
arXiv Detail & Related papers (2022-05-24T12:14:26Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z) - Adversarially-Trained Deep Nets Transfer Better: Illustration on Image
Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z) - Generalized Zero and Few-Shot Transfer for Facial Forgery Detection [3.8073142980733]
We propose a new transfer learning approach to address the problem of zero and few-shot transfer in the context of forgery detection.
We find this learning strategy to be surprisingly effective at domain transfer compared to traditional classification or even state-of-the-art domain adaptation/few-shot learning methods.
arXiv Detail & Related papers (2020-06-21T18:10:52Z)