Evaluating Knowledge Transfer in Neural Network for Medical Images
- URL: http://arxiv.org/abs/2008.13574v2
- Date: Thu, 17 Sep 2020 21:33:31 GMT
- Title: Evaluating Knowledge Transfer in Neural Network for Medical Images
- Authors: Sina Akbarian, Laleh Seyyed-Kalantari, Farzad Khalvati, and Elham
Dolatabadi
- Abstract summary: We propose a teacher-student learning framework to transfer knowledge from a CNN teacher to a student CNN.
We investigate the proposed network's performance when the student network is trained on a small dataset.
Our results indicate that the teacher-student learning framework outperforms transfer learning for small datasets.
- Score: 0.18599311233727078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning and knowledge transfer techniques have permeated the field of
medical imaging and are considered key approaches for revolutionizing
diagnostic imaging practices. However, there are still challenges for the
successful integration of deep learning into medical imaging tasks due to a
lack of large annotated imaging data. To address this issue, we propose a
teacher-student learning framework to transfer knowledge from a carefully
pre-trained convolutional neural network (CNN) teacher to a student CNN. In
this study, we explore the performance of knowledge transfer in the medical
imaging setting. We investigate the proposed network's performance when the
student network is trained on a small dataset (target dataset) as well as when
the teacher's and the student's domains are distinct. The performance of the CNN
models is evaluated on three medical imaging datasets: Diabetic
Retinopathy, CheXpert, and ChestX-ray8. Our results indicate that the
teacher-student learning framework outperforms transfer learning for small
imaging datasets. In particular, the teacher-student learning framework improves
the area under the ROC Curve (AUC) of the CNN model on a small sample of
CheXpert (n=5k) by 4% and on ChestX-ray8 (n=5.6k) by 9%. Beyond the small
training-data setting, we also demonstrate a clear advantage of the
teacher-student learning framework over transfer learning in the medical
imaging setting. We observe that the teacher-student network holds great
promise not only for improving diagnostic performance but also for reducing
overfitting when the dataset is small.
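The proposed framework is a teacher-student (knowledge-distillation) setup, in which the student CNN is trained to match the teacher's softened output distribution in addition to the ground-truth labels. A minimal, dependency-free sketch of the standard distillation loss follows; the temperature (T=4) and mixing weight (alpha=0.7) are illustrative assumptions, not values reported in the paper:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Weighted sum of a soft loss (cross-entropy of the student against the
    teacher's temperature-softened distribution) and a hard loss
    (cross-entropy against the ground-truth label). The T^2 factor keeps
    soft-loss gradient magnitudes comparable across temperatures."""
    soft_teacher = softmax(teacher_logits, temperature)
    soft_student = softmax(student_logits, temperature)
    soft_loss = -sum(t * math.log(s)
                     for t, s in zip(soft_teacher, soft_student))
    hard_student = softmax(student_logits)
    hard_loss = -math.log(hard_student[true_label])
    return alpha * (temperature ** 2) * soft_loss + (1 - alpha) * hard_loss
```

A higher temperature flattens the teacher's distribution, exposing the relative probabilities of wrong classes ("dark knowledge") that a one-hot label discards.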
Related papers
- Disease Classification and Impact of Pretrained Deep Convolution Neural Networks on Diverse Medical Imaging Datasets across Imaging Modalities [0.0]
This paper investigates the intricacies of using pretrained deep convolutional neural networks with transfer learning across diverse medical imaging datasets.
It shows that the use of pretrained models as fixed feature extractors yields poor performance irrespective of the dataset.
It is also found that deeper and more complex architectures did not necessarily result in the best performance.
arXiv Detail & Related papers (2024-08-30T04:51:19Z)
- Connecting the Dots: Graph Neural Network Powered Ensemble and Classification of Medical Images [0.0]
Deep learning for medical imaging is limited due to the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling the proficient extraction of features and modeling of relationships.
arXiv Detail & Related papers (2023-11-13T13:20:54Z)
- Transfer learning from a sparsely annotated dataset of 3D medical images [4.477071833136902]
This study explores the use of transfer learning to improve the performance of deep convolutional neural networks for organ segmentation in medical imaging.
A base segmentation model was trained on a large, sparsely annotated dataset; its weights were used for transfer learning on four new downstream segmentation tasks.
The results showed that transfer learning from the base model was beneficial when small datasets were available.
arXiv Detail & Related papers (2023-11-08T21:31:02Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic has spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and validated in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis [7.339428207644444]
We conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset.
We present a practical approach to bridge the domain gap between natural and medical images by continually pre-training supervised ImageNet models on medical images.
arXiv Detail & Related papers (2021-08-12T19:08:34Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to aid the cross-attention process and is able to overcome the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Student-Teacher Curriculum Learning via Reinforcement Learning: Predicting Hospital Inpatient Admission Location [4.359338565775979]
In this work we propose a student-teacher network via reinforcement learning to deal with this specific problem.
A representation of the weights of the student network is treated as the state and is fed as an input to the teacher network.
The teacher network's action is to select the most appropriate batch of data to train the student network on from a training set sorted according to entropy.
arXiv Detail & Related papers (2020-07-01T15:00:43Z)
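The curriculum mechanism in the last entry (a training set sorted by entropy, with the teacher's action selecting a batch) can be sketched in a few lines. The helper names, the uniform batch split, and the use of the student's predicted distribution as the entropy source are illustrative assumptions, not details from that paper:

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def sorted_batches(dataset, student_probs, batch_size):
    """Sort examples by the entropy of the student's predicted distribution
    (a proxy for how uncertain the student is about each example), then
    split them into contiguous batches the teacher policy can choose among."""
    order = sorted(range(len(dataset)), key=lambda i: entropy(student_probs[i]))
    ranked = [dataset[i] for i in order]
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]

def teacher_select(batches, action):
    """The teacher network's action is an index into the entropy-sorted batches."""
    return batches[action]
```

In the full method, the teacher's action would come from a policy conditioned on a representation of the student's weights; here the selection is left as a plain index to keep the sketch self-contained.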
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.