Resolution-Based Distillation for Efficient Histology Image
Classification
- URL: http://arxiv.org/abs/2101.04170v1
- Date: Mon, 11 Jan 2021 20:00:35 GMT
- Title: Resolution-Based Distillation for Efficient Histology Image
Classification
- Authors: Joseph DiPalma, Arief A. Suriawinata, Laura J. Tafe, Lorenzo
Torresani, Saeed Hassanpour
- Abstract summary: This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification.
The proposed approach is robust when used with images that have reduced input resolution and can be trained effectively with limited labeled data.
We evaluate our approach on two histology image datasets associated with celiac disease (CD) and lung adenocarcinoma (LUAD).
- Score: 29.603903713682275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing deep learning models to analyze histology images has been
computationally challenging, as the massive size of the images causes excessive
strain on all parts of the computing pipeline. This paper proposes a novel deep
learning-based methodology for improving the computational efficiency of
histology image classification. The proposed approach is robust when used with
images that have reduced input resolution and can be trained effectively with
limited labeled data. Our method uses knowledge distillation (KD) to transfer
knowledge from a teacher model pre-trained on the original high-resolution (HR)
images to a student model trained on the same images at a much lower
resolution. To address the lack of large-scale labeled histology image
datasets, we perform KD in a self-supervised manner. We evaluate our approach
on two histology image datasets associated with celiac disease (CD) and lung
adenocarcinoma (LUAD). Our results show that a combination of KD and
self-supervision allows the student model to approach, and in some cases,
surpass the classification accuracy of the teacher, while being much more
efficient. Additionally, we observe an increase in student classification
performance as the size of the unlabeled dataset increases, indicating that
there is potential to scale further. For the CD data, our model outperforms the
HR teacher model, while needing 4 times fewer computations. For the LUAD data,
our student model results at 1.25x magnification are within 3% of the teacher
model at 10x magnification, with a 64 times computational cost reduction.
Moreover, our CD results continue to improve as more unlabeled data is used.
At 0.625x magnification, using unlabeled data improves
accuracy by 4% over the baseline. Thus, our method can improve the feasibility
of deep learning solutions for digital pathology with standard computational
hardware.
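To make the approach concrete, the following is a minimal sketch of the resolution-based distillation idea described above: a teacher pre-trained on high-resolution (HR) patches supervises a student that sees the same patches downsampled to a much lower resolution, using only the teacher's soft predictions and no labels (the self-supervised KD setting). The backbone choices, downsampling factor, temperature, and optimizer settings below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of resolution-based knowledge distillation (illustrative only;
# model choices and hyperparameters are assumptions, not the paper's recipe).
import torch
import torch.nn.functional as F
from torchvision import models

# Teacher pre-trained on high-resolution (HR) patches (weights assumed loaded);
# the student is a smaller network trained on downsampled versions of the same patches.
teacher = models.resnet50(num_classes=2).eval()
student = models.resnet18(num_classes=2)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 4.0  # softmax temperature for distillation


def distill_step(hr_batch, scale_factor=0.125):
    """One self-supervised KD step; hr_batch is an (N, 3, H, W) float tensor."""
    with torch.no_grad():
        teacher_logits = teacher(hr_batch)
    # Downsample the same images for the student (e.g., 10x -> 1.25x magnification).
    lr_batch = F.interpolate(hr_batch, scale_factor=scale_factor,
                             mode="bilinear", align_corners=False)
    student_logits = student(lr_batch)
    # KL divergence between softened teacher and student output distributions.
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the cost of a convolutional network scales roughly with the number of input pixels, the reported savings are consistent with the square of the magnification ratio: halving the magnification for CD yields about 4 times fewer computations, and reducing LUAD inputs from 10x to 1.25x magnification yields about (10/1.25)^2 = 64 times fewer.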
Related papers
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19
Image Classification [57.1795052451257]
We study the dependence of the GAN-based augmentation performance on dataset size with a focus on small samples.
We train StyleGAN2-ADA with both sets and then, after validating the quality of generated images, we use the trained GANs as one of the augmentation approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation in the case of medium and large datasets but underperforms in the case of smaller datasets.
arXiv Detail & Related papers (2024-01-26T08:28:13Z)
- Learned Image resizing with efficient training (LRET) facilitates
improved performance of large-scale digital histopathology image
classification models [0.0]
Histologic examination plays a crucial role in oncology research and diagnostics.
Current approaches to training deep convolutional neural networks (DCNNs) result in suboptimal model performance.
We introduce a novel approach that addresses the main limitations of traditional histopathology classification model training.
arXiv Detail & Related papers (2024-01-19T23:45:47Z)
- Attention to detail: inter-resolution knowledge distillation [1.927195358774599]
Development of computer vision solutions for gigapixel images in digital pathology is hampered by the large size of whole slide images.
Recent literature has proposed using knowledge distillation to enhance the model performance at reduced image resolutions.
In this work, we propose to distill this information by incorporating attention maps during training.
arXiv Detail & Related papers (2024-01-11T16:16:20Z)
- Iterative-in-Iterative Super-Resolution Biomedical Imaging Using One
Real Image [8.412910029745762]
We propose an approach to train deep learning-based super-resolution models using only one real image.
We employ a mixed metric of image screening to automatically select images with a distribution similar to the ground truth.
After five training iterations, the proposed deep learning-based super-resolution model achieved improvements of 7.5% in structural similarity and 5.49% in peak signal-to-noise ratio.
arXiv Detail & Related papers (2023-06-26T07:57:03Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in applying deep learning to the medical domain is the limited availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA Generative Adversarial Network is trained on a limited set of COVID-19 chest X-ray images.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for
Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection pursue high prediction accuracy without considering the limited computing resources of portable devices.
This study proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin disease classification.
arXiv Detail & Related papers (2022-03-22T06:54:29Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for
Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best-performing model achieved an accuracy of 0.893 and a recall of 0.897, outperforming its baseline recall by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Optimal Transfer Learning Model for Binary Classification of Funduscopic
Images through Simple Heuristics [0.8370915747360484]
We use deep neural networks to diagnose funduscopic images, which are visual representations of the interior of the eye.
We propose a unifying model for disease classification: low-cost inference of a fundus image to determine whether it is healthy or diseased.
arXiv Detail & Related papers (2020-02-11T03:49:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.