Unlabeled Data Deployment for Classification of Diabetic Retinopathy
Images Using Knowledge Transfer
- URL: http://arxiv.org/abs/2002.03321v1
- Date: Sun, 9 Feb 2020 09:01:11 GMT
- Title: Unlabeled Data Deployment for Classification of Diabetic Retinopathy
Images Using Knowledge Transfer
- Authors: Sajjad Abbasi, Mohsen Hajabdollahi, Nader Karimi, Shadrokh Samavi,
Shahram Shirani
- Abstract summary: Transfer learning is used to address the lack of labeled data.
Knowledge distillation was recently proposed to transfer the knowledge of one model to another.
In this paper, a novel knowledge distillation method using transfer learning is proposed to transfer the entire knowledge of one model to another.
- Score: 11.031841470875571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) are extensively beneficial for medical
image processing. Medical images are plentiful, but annotated data are scarce.
Transfer learning is used to address the lack of labeled data and enables
better training of CNNs. Transfer learning can be applied in many medical
applications; however, the model receiving the transferred knowledge must have
the same size as the original network. Knowledge distillation was recently
proposed to transfer the knowledge of one model to another and can cover some
shortcomings of transfer learning, but some parts of the knowledge may not be
distilled. In this paper, a novel knowledge distillation method using transfer
learning is proposed to transfer the entire knowledge of a model to another.
The proposed method can be beneficial and practical for medical image analysis,
where only a small amount of labeled data is available. The proposed process is
tested on diabetic retinopathy classification. Simulation results demonstrate
that, using the proposed method, the knowledge of a large network can be
transferred to a smaller model.
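The core idea in the abstract, distilling a large (teacher) model's knowledge into a smaller (student) model using unlabeled images, can be sketched as a soft-target matching loss. The sketch below is a generic temperature-scaled distillation loss, not the authors' exact method; the temperature `T` and the toy logits are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between the teacher's softened outputs (soft targets)
    # and the student's outputs. No ground-truth labels appear, which is
    # why unlabeled images can drive the knowledge transfer.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return float(np.mean(kl) * T * T)

# Toy check: a student that matches the teacher exactly incurs zero loss.
teacher = np.array([[2.0, 0.5, -1.0]])
print(distillation_loss(teacher, teacher))
```

In practice, the student would minimize this loss over batches of unlabeled fundus images, often combined with a standard cross-entropy term on whatever labeled data is available.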
Related papers
- Pick the Best Pre-trained Model: Towards Transferability Estimation for
Medical Image Segmentation [20.03177073703528]
Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
arXiv Detail & Related papers (2023-07-22T01:58:18Z)
- Pre-text Representation Transfer for Deep Learning with Limited Imbalanced Data: Application to CT-based COVID-19 Detection [18.72489078928417]
We propose a novel concept of Pre-text Representation Transfer (PRT)
PRT retains the original classification layers and updates the representation layers through an unsupervised pre-text task.
Our results show a consistent gain over the conventional transfer learning with the proposed method.
arXiv Detail & Related papers (2023-01-21T04:47:35Z)
- Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems [79.58173794910631]
A trained T1 class Convolutional Neural Network (CNN) model will be used to examine its ability to successfully identify motor imagery.
In theory, and if the model has been trained accurately, it should be able to identify a class and label it accordingly.
The CNN model will then be restored and used to try and identify the same class of motor imagery data using much smaller sampled data.
arXiv Detail & Related papers (2022-05-31T17:09:46Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
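The weight-averaging idea behind WEAVER can be illustrated with a minimal sketch, where plain Python lists stand in for model parameter tensors. The interpolation coefficient `alpha` and the layer name are hypothetical illustration choices, not WEAVER's exact procedure.

```python
def weight_average(old_params, new_params, alpha=0.5):
    """Interpolate two models' parameters element-wise.

    alpha=0.5 gives a plain average of the previously trained model and
    the newly trained one; other values weight one model more heavily.
    """
    return {name: [alpha * o + (1 - alpha) * n
                   for o, n in zip(old_params[name], new_params[name])]
            for name in old_params}

# Toy example with one "layer" of three weights.
old = {"dense.w": [1.0, 2.0, 3.0]}
new = {"dense.w": [3.0, 2.0, 1.0]}
merged = weight_average(old, new)
print(merged["dense.w"])  # [2.0, 2.0, 2.0]
```

Applied after each sequential training round, this kind of post-processing folds old knowledge into the new model without retraining on the combined data.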
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Classification of Diabetic Retinopathy Using Unlabeled Data and Knowledge Distillation [10.032419030373399]
The proposed method transfers the entire knowledge of a model to a new smaller one.
Unlabeled data are used in an unsupervised manner to transfer the maximum amount of knowledge to the new slimmer model.
The proposed method can be beneficial in medical image analysis, where labeled data are typically scarce.
arXiv Detail & Related papers (2020-09-01T07:18:39Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Adversarial Multi-Source Transfer Learning in Healthcare: Application to Glucose Prediction for Diabetic People [4.17510581764131]
We propose a multi-source adversarial transfer learning framework that enables the learning of a feature representation that is similar across the sources.
We apply this idea to glucose forecasting for diabetic people using a fully convolutional neural network.
The approach is particularly effective when combining data from different datasets, or when too little data is available within a single dataset.
arXiv Detail & Related papers (2020-06-29T11:17:50Z)
- Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning [29.46896757506273]
Images annotated by well-qualified doctors are very expensive, and only a limited amount of data is available for various retinal diseases.
Some studies show that AMD and DR share common features, such as hemorrhagic points and exudation, but most classification algorithms train those disease models independently.
We propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both semantic and feature space as additional signals and trains the model in a collaborative manner.
arXiv Detail & Related papers (2020-03-24T01:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.