Multi-Scale Input Strategies for Medulloblastoma Tumor Classification
using Deep Transfer Learning
- URL: http://arxiv.org/abs/2109.06547v1
- Date: Tue, 14 Sep 2021 09:42:37 GMT
- Title: Multi-Scale Input Strategies for Medulloblastoma Tumor Classification using Deep Transfer Learning
- Authors: Marcel Bengs, Satish Pant, Michael Bockmayr, Ulrich Schüller, Alexander Schlaefer
- Abstract summary: Medulloblastoma is the most common malignant brain cancer among children.
CNN has shown promising results for MB subtype classification.
We study the impact of tile size and input strategy.
- Score: 59.30734371401316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medulloblastoma (MB) is a primary central nervous system tumor and the most
common malignant brain cancer among children. Neuropathologists perform
microscopic inspection of histopathological tissue slides under a microscope to
assess the severity of the tumor. This is a time-consuming task and often
infused with observer variability. Recently, pre-trained convolutional neural
networks (CNN) have shown promising results for MB subtype classification.
Typically, high-resolution images are divided into smaller tiles for
classification, while the size of the tiles has not been systematically
evaluated. We study the impact of tile size and input strategy and classify the
two major histopathological subtypes, Classic and Desmoplastic/Nodular. To this
end, we use recently proposed EfficientNets and evaluate tiles of increasing
size combined with various downsampling scales. Our results demonstrate that
using large input tiles followed by intermediate downsampling and patch
cropping significantly improves MB classification performance. Our
top-performing method achieves an AUC-ROC of 90.90% compared to 84.53% for
the previous approach with smaller input tiles.
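The winning strategy from the abstract (large input tiles, intermediate downsampling, then patch cropping) can be sketched as follows. The tile size, downsampling factor, and crop size here are illustrative assumptions, not the paper's exact settings, and block-averaging stands in for a proper image-resizing routine.

```python
import numpy as np

def extract_tiles(slide, tile_size):
    """Split a slide array (H, W, C) into non-overlapping square tiles."""
    h, w = slide.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(slide[y:y + tile_size, x:x + tile_size])
    return tiles

def downsample(tile, factor):
    """Downsample by block-averaging (a simple stand-in for image resizing)."""
    h, w, c = tile.shape
    return tile.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def center_crop(tile, size):
    """Crop a centered square patch to match the network's input resolution."""
    h, w = tile.shape[:2]
    y, x = (h - size) // 2, (w - size) // 2
    return tile[y:y + size, x:x + size]

# Illustrative pipeline: 2048-px tiles, 2x downsampling, 512-px crops.
slide = np.random.rand(4096, 4096, 3)
tiles = extract_tiles(slide, tile_size=2048)
patches = [center_crop(downsample(t, factor=2), size=512) for t in tiles]
```

Each patch thus retains a wide field of view from the original tile while staying small enough to feed a pre-trained CNN.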
Related papers
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention guiding loss, this leads to an accuracy boost of the trained models with few regions annotated for each class.
It may in the future serve as an important contribution to train MIL models in the clinically relevant context of cancer classification in histopathology.
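The uncertainty-driven selection step above can be sketched roughly as follows. Using binary prediction entropy as the confidence metric is an assumption for illustration; the summary does not state which metric the authors compute.

```python
import math

def entropy(p):
    """Binary prediction entropy: highest when p is near 0.5 (most uncertain)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def most_uncertain(slide_probs, k):
    """Return the ids of the k slides with the highest prediction entropy."""
    ranked = sorted(slide_probs, key=lambda sid: entropy(slide_probs[sid]), reverse=True)
    return ranked[:k]

# Slides whose predicted probability is closest to 0.5 are queried first
# for expert annotation (hypothetical ids and probabilities).
probs = {"wsi_a": 0.97, "wsi_b": 0.52, "wsi_c": 0.10, "wsi_d": 0.45}
query = most_uncertain(probs, k=2)
```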
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
arXiv Detail & Related papers (2021-08-08T11:26:34Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- A Multi-Scale Conditional Deep Model for Tumor Cell Ratio Counting [4.164451715899639]
We propose a method to accurately obtain the ratio of tumor cells over an entire histological slide.
We use deep fully convolutional neural network models trained to detect and classify cells on images of H&E-stained tissue sections.
We show that combining two models, each working at a different magnification, allows the system to capture both cell-level details and surrounding context.
arXiv Detail & Related papers (2021-01-27T22:40:33Z)
- Investigating and Exploiting Image Resolution for Transfer Learning-based Skin Lesion Classification [3.110738188734789]
Fine-tuning pre-trained convolutional neural networks (CNNs) has been shown to work well for skin lesion classification.
In this paper, we explore the effect of input image size on skin lesion classification performance of fine-tuned CNNs.
Our results show that very small images (64x64 pixels) degrade classification performance, while 128x128-pixel images already support good performance, and larger image sizes yield slight further improvement.
arXiv Detail & Related papers (2020-06-25T21:51:24Z)
- A Two-Stage Multiple Instance Learning Framework for the Detection of Breast Cancer in Mammograms [13.842620686759616]
Mammograms are commonly employed in the large scale screening of breast cancer.
We propose a two-stage Multiple Instance Learning framework for image-level detection of malignancy.
A global image-level feature is computed as a weighted average of patch-level features learned using a CNN.
Our method performed well on the task of localization of masses with an average Precision/Recall of 0.76/0.80 and achieved an average AUC of 0.91 on the image-level classification task.
arXiv Detail & Related papers (2020-04-24T13:06:47Z)
- Resource-Frugal Classification and Analysis of Pathology Slides Using Image Entropy [0.0]
Histopathology slides of lung malignancies are classified using resource-frugal convolutional neural networks (CNNs).
A lightweight CNN produces tile-level classifications that are aggregated to classify the slide.
Color-coded probability maps are created by overlapping tiles and averaging the tile-level probabilities at the pixel level.
arXiv Detail & Related papers (2020-02-16T18:42:36Z)
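The overlapping-tile probability averaging described in the last entry can be sketched as follows. The tile size, positions, and per-tile probabilities are illustrative assumptions; a real pipeline would take them from a CNN's tile-level predictions.

```python
import numpy as np

def probability_map(tile_probs, positions, tile_size, shape):
    """Average tile-level probabilities per pixel over overlapping tiles.

    tile_probs: one scalar probability per tile; positions: (y, x) corners.
    Pixels covered by no tile are left at zero.
    """
    acc = np.zeros(shape)
    count = np.zeros(shape)
    for p, (y, x) in zip(tile_probs, positions):
        acc[y:y + tile_size, x:x + tile_size] += p
        count[y:y + tile_size, x:x + tile_size] += 1
    return np.divide(acc, count, out=np.zeros(shape), where=count > 0)

# Two half-overlapping tiles: the shared region averages their probabilities.
pmap = probability_map([0.2, 0.8], [(0, 0), (0, 2)], tile_size=4, shape=(4, 6))
```

A color map applied to the resulting array then yields the kind of per-pixel probability overlay the paper describes.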
This list is automatically generated from the titles and abstracts of the papers in this site.