Classification of Diabetic Retinopathy Using Unlabeled Data and
Knowledge Distillation
- URL: http://arxiv.org/abs/2009.00982v1
- Date: Tue, 1 Sep 2020 07:18:39 GMT
- Title: Classification of Diabetic Retinopathy Using Unlabeled Data and
Knowledge Distillation
- Authors: Sajjad Abbasi, Mohsen Hajabdollahi, Pejman Khadivi, Nader Karimi,
Roshanak Roshandel, Shahram Shirani, Shadrokh Samavi
- Abstract summary: The proposed method transfers the entire knowledge of a model to a new smaller one.
Unlabeled data are used in an unsupervised manner to transfer the maximum amount of knowledge to the new slimmer model.
The proposed method can be beneficial in medical image analysis, where labeled data are typically scarce.
- Score: 10.032419030373399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge distillation allows transferring knowledge from a pre-trained model
to another. However, it suffers from limitations, such as the constraint that
the two models be architecturally similar. Knowledge distillation addresses
some of the shortcomings of transfer learning by generalizing a complex model
into a lighter one. However, some parts of the knowledge may not be distilled
sufficiently. In this
paper, a novel knowledge distillation approach using transfer learning is
proposed. The proposed method transfers the entire knowledge of a model to a
new smaller one. To accomplish this, unlabeled data are used in an unsupervised
manner to transfer the maximum amount of knowledge to the new slimmer model.
The proposed method can be beneficial in medical image analysis, where labeled
data are typically scarce. The proposed approach is evaluated in the context of
classification of images for diagnosing Diabetic Retinopathy on two publicly
available datasets, Messidor and EyePACS. Simulation results
demonstrate that the approach is effective in transferring knowledge from a
complex model to a lighter one. Furthermore, experimental results illustrate
that the performance of different small models is improved significantly using
unlabeled data and knowledge distillation.
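The mechanism described above is standard soft-label distillation driven purely by unlabeled images. A minimal PyTorch sketch follows; `teacher`, `student`, and `unlabeled_loader` are illustrative placeholders, and the temperature is an arbitrary choice rather than a value from the paper.

```python
import torch
import torch.nn.functional as F

def distill_on_unlabeled(teacher, student, unlabeled_loader,
                         temperature=4.0, epochs=10, lr=1e-3):
    """Train `student` to match `teacher`'s soft predictions; no labels needed."""
    teacher.eval()  # the teacher stays frozen throughout
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for images in unlabeled_loader:  # batches of unlabeled images
            with torch.no_grad():
                t_logits = teacher(images)
            s_logits = student(images)
            # KL divergence between temperature-softened distributions;
            # the T^2 factor keeps gradients comparable across temperatures.
            loss = F.kl_div(
                F.log_softmax(s_logits / temperature, dim=1),
                F.softmax(t_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```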
Related papers
- A metric learning approach for endoscopic kidney stone identification [0.879504058268139]
This paper exploits Deep Metric Learning (DML) methods i) to handle classes with few samples, ii) to generalize well to out-of-distribution samples, and iii) to cope better with new classes which are added to the database.
The proposed Guided Deep Metric Learning approach is based on a novel architecture which was designed to learn data representations in an improved way.
The teacher model (GEMINI) generates a reduced hypothesis space based on prior knowledge from the labeled data, and is used as a guide for a student model (i.e., ResNet50) through a Knowledge Distillation scheme.
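For intuition, here is a generic PyTorch sketch of teacher-guided metric learning: a triplet loss shapes the student's embedding space while an auxiliary term pulls its embeddings toward the frozen teacher's. This illustrates the general scheme only, not the GEMINI architecture itself, and it assumes both models embed into the same dimension.

```python
import torch
import torch.nn.functional as F

triplet = torch.nn.TripletMarginLoss(margin=0.2)  # margin is a placeholder

def guided_dml_loss(student, teacher, anchor, positive, negative, alpha=0.5):
    # Student embeddings for the triplet
    s_a, s_p, s_n = student(anchor), student(positive), student(negative)
    with torch.no_grad():
        t_a = teacher(anchor)  # frozen teacher acts as the guide
    metric_loss = triplet(s_a, s_p, s_n)  # pull positives in, push negatives out
    guide_loss = F.mse_loss(s_a, t_a)     # distill the teacher's embedding
    return metric_loss + alpha * guide_loss
```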
arXiv Detail & Related papers (2023-07-13T20:02:07Z)
- Knowledge Distillation via Token-level Relationship Graph [12.356770685214498]
We propose a novel method called Knowledge Distillation with Token-level Relationship Graph (TRG).
By employing TRG, the student model can effectively emulate higher-level semantic information from the teacher model.
We conduct experiments to evaluate the effectiveness of the proposed method against several state-of-the-art approaches.
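One plausible reading of token-level relationship distillation, sketched in PyTorch below: build a cosine-similarity graph over each model's tokens and make the student's graph match the teacher's. The paper's exact TRG formulation may differ; this is only an illustration of the core operation.

```python
import torch
import torch.nn.functional as F

def token_relation_graph(tokens):
    # tokens: (batch, num_tokens, dim) -> pairwise cosine similarities
    t = F.normalize(tokens, dim=-1)
    return t @ t.transpose(1, 2)  # (batch, num_tokens, num_tokens)

def trg_loss(student_tokens, teacher_tokens):
    # Match the student's token-relationship graph to the teacher's
    return F.mse_loss(token_relation_graph(student_tokens),
                      token_relation_graph(teacher_tokens.detach()))
```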
arXiv Detail & Related papers (2023-06-20T08:16:37Z)
- Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation [66.25738680429463]
Knowledge Distillation (KD) for object detection aims to train a compact detector by transferring knowledge from a teacher model.
We propose inconsistent knowledge distillation (IKD) which aims to distill knowledge inherent in the teacher model's counter-intuitive perceptions.
Our method outperforms state-of-the-art KD baselines on one-stage, two-stage and anchor-free object detectors.
arXiv Detail & Related papers (2022-09-20T16:36:28Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
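A simplified PyTorch sketch of the fixed-extractor idea: represent each class generatively by a running mean of its features, so learning a new class never overwrites old parameters and earlier classes cannot be forgotten. The paper's Bayesian model is richer than this nearest-mean simplification.

```python
import torch

class GenerativeClassifier:
    def __init__(self, extractor):
        self.extractor = extractor.eval()  # frozen pre-trained backbone
        self.means, self.counts = {}, {}

    @torch.no_grad()
    def learn_class(self, images, label):
        feats = self.extractor(images)
        mean, n = feats.mean(0), feats.shape[0]
        if label in self.means:  # running update; other classes untouched
            m, c = self.means[label], self.counts[label]
            self.means[label] = (m * c + mean * n) / (c + n)
            self.counts[label] = c + n
        else:
            self.means[label], self.counts[label] = mean, n

    @torch.no_grad()
    def predict(self, images):
        feats = self.extractor(images)                         # (B, D)
        labels = list(self.means)
        protos = torch.stack([self.means[l] for l in labels])  # (C, D)
        return [labels[i] for i in torch.cdist(feats, protos).argmin(1).tolist()]
```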
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection pursue high prediction accuracy without considering the limited computing resources of portable devices.
This study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin diseases classification.
arXiv Detail & Related papers (2022-03-22T06:54:29Z)
- Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition [65.77962788209103]
We propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge.
It forces the model to focus on learning subset-specific knowledge.
The proposed framework proved effective for the long-tailed retinal disease recognition task.
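The splitting step can be illustrated with a small Python sketch; the frequency thresholds below are arbitrary placeholders, not the paper's prior-knowledge-based partition.

```python
from collections import Counter

def split_by_frequency(labels, head_min=100, tail_max=20):
    # Partition class ids into head/medium/tail subsets by sample count,
    # so a model (or branch) can focus on subset-specific knowledge.
    counts = Counter(labels)
    head = {c for c, n in counts.items() if n >= head_min}
    tail = {c for c, n in counts.items() if n <= tail_max}
    medium = set(counts) - head - tail
    return head, medium, tail
```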
arXiv Detail & Related papers (2021-04-22T13:39:33Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, i.e., identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called the Prototypical Network, which is a simple yet effective meta-learning machine for few-shot image classification.
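A Prototypical Network episode is compact enough to sketch directly in PyTorch: prototypes are the mean support embeddings, and queries are scored by negative distance to each prototype. The encoder and episode tensors are placeholders.

```python
import torch
import torch.nn.functional as F

def proto_episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    z_support = encoder(support_x)  # (n_way * k_shot, D)
    z_query = encoder(query_x)      # (num_queries, D)
    # One prototype per class: the mean embedding of its support examples
    prototypes = torch.stack([
        z_support[support_y == c].mean(0) for c in range(n_way)
    ])                              # (n_way, D)
    logits = -torch.cdist(z_query, prototypes)  # closer prototype => higher score
    return F.cross_entropy(logits, query_y)
```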
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
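The consistency core of such methods fits in a few lines of PyTorch: perturb the same unlabeled batch twice and push the two predictions to agree. The paper's relation-driven term is more elaborate; `augment` here stands for any stochastic perturbation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_images, augment):
    # Two stochastic views of the same unlabeled batch
    p1 = F.softmax(model(augment(unlabeled_images)), dim=1)
    with torch.no_grad():  # one branch serves as the (non-gradient) target
        p2 = F.softmax(model(augment(unlabeled_images)), dim=1)
    return F.mse_loss(p1, p2)
```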
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning [29.46896757506273]
Images annotated by well-qualified doctors are very expensive, and only a limited amount of data is available for various retinal diseases.
Some studies show that AMD and DR share common features, such as hemorrhagic points and exudation, but most classification algorithms train each disease model independently.
We propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both semantic and feature space as additional signals and trains the model in a collaborative manner.
arXiv Detail & Related papers (2020-03-24T01:32:04Z)
- Unlabeled Data Deployment for Classification of Diabetic Retinopathy Images Using Knowledge Transfer [11.031841470875571]
Transfer learning is used to address the lack of labeled data.
Knowledge distillation was recently proposed to transfer the knowledge of one model to another.
In this paper, a novel knowledge distillation method using transfer learning is proposed to transfer the whole knowledge of a model to another.
arXiv Detail & Related papers (2020-02-09T09:01:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.