Deep learning model trained on mobile phone-acquired frozen section
images effectively detects basal cell carcinoma
- URL: http://arxiv.org/abs/2011.11081v1
- Date: Sun, 22 Nov 2020 18:30:23 GMT
- Title: Deep learning model trained on mobile phone-acquired frozen section
images effectively detects basal cell carcinoma
- Authors: Junli Cao, B.S., Junyan Wu, M.S., Jing W. Zhang, M.D., Ph.D., Jay J.
Ye, M.D., Ph.D., Limin Yu, M.D., M.S.
- Abstract summary: We explore whether a deep learning model trained on mobile phone-acquired frozen section images can have adequate performance for future deployment.
The model takes an image as input and produces a two-dimensional black-and-white prediction map of the same dimensions.
The model achieves an area under the curve of 0.99 for the receiver operating characteristic curve and 0.97 for the precision-recall curve at the pixel level.
- Score: 0.728871001316957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Margin assessment of basal cell carcinoma using the frozen
section is a common task of pathology intraoperative consultation. Although
frequently straightforward, the determination of the presence or absence of
basal cell carcinoma on the tissue sections can sometimes be challenging. We
explore whether a deep learning model trained on mobile phone-acquired frozen
section images can have adequate performance for future deployment. Materials
and Methods: One thousand two hundred and forty-one (1241) images of frozen
sections performed for basal cell carcinoma margin status were acquired using
mobile phones. The photos were taken at 100x magnification (10x objective). The
images were downscaled from a 4032 x 3024 pixel resolution to 576 x 432 pixel
resolution. The semantic segmentation algorithm DeepLab V3 with an Xception
backbone was used for model training. Results: The model takes an image as
input and produces a two-dimensional black-and-white prediction map of the
same dimensions; areas determined to be basal cell carcinoma are displayed in
white on a black background. Any output in which the number of white pixels
exceeds 0.5% of the total number of pixels is deemed positive for basal cell
carcinoma (see the code sketches following the abstract). On the test set, the
model achieves an area under the curve of 0.99 for the receiver operating
characteristic curve and 0.97 for the precision-recall curve at the pixel
level.
The accuracy of classification at the slide level is 96%. Conclusions: The
deep learning model trained with mobile phone images shows satisfactory
performance characteristics and thus demonstrates the potential to be deployed
as a mobile phone app to assist in frozen section interpretation in real time.
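
As a rough illustration of the preprocessing step described in Materials and
Methods, the sketch below downscales 4032 x 3024 phone photos to 576 x 432
using Pillow. The folder names are hypothetical, not taken from the paper.

```python
# Minimal preprocessing sketch (assumed layout: one JPEG per frozen-section photo).
from pathlib import Path
from PIL import Image

SRC_DIR = Path("frozen_section_photos")   # hypothetical input folder
DST_DIR = Path("downscaled_576x432")      # hypothetical output folder
DST_DIR.mkdir(exist_ok=True)

for photo in sorted(SRC_DIR.glob("*.jpg")):
    img = Image.open(photo)                        # 4032 x 3024 mobile phone photo
    small = img.resize((576, 432), Image.LANCZOS)  # 7x downscale, keeps the 4:3 aspect ratio
    small.save(DST_DIR / photo.name)
```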
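
For the model itself, the paper trains DeepLab V3 with an Xception backbone.
The sketch below is only a stand-in: torchvision does not ship an Xception
encoder, so it uses torchvision's DeepLabV3-ResNet50 configured for two-class
(background vs. carcinoma) segmentation. It shows the general recipe, not the
authors' implementation, and the learning rate is a guess.

```python
# Assumed stand-in: DeepLabV3-ResNet50 in place of the paper's Xception-backbone DeepLab V3.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=2)   # classes: background, BCC
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # hypothetical learning rate

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """images: (B, 3, 432, 576) floats; masks: (B, 432, 576) longs with values {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]    # (B, 2, 432, 576) per-pixel class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```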
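
The slide-level decision rule and the pixel-level metrics quoted in the
Results can be expressed compactly. Below is a hedged sketch in which `probs`
and `truth` are assumed arrays collected from the test set;
average_precision_score is used as the usual stand-in for the area under the
precision-recall curve.

```python
# Sketch of the 0.5% white-pixel rule and pixel-level AUC metrics; array names are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def slide_is_positive(pred_mask: np.ndarray, threshold: float = 0.005) -> bool:
    """pred_mask: binary (H, W) prediction; positive if white pixels exceed 0.5% of all pixels."""
    return pred_mask.mean() > threshold

def pixel_level_metrics(probs: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """probs: (N, H, W) predicted carcinoma probabilities; truth: (N, H, W) binary ground truth."""
    y_score, y_true = probs.ravel(), truth.ravel()
    roc_auc = roc_auc_score(y_true, y_score)           # area under the ROC curve
    pr_auc = average_precision_score(y_true, y_score)  # approximates area under the PR curve
    return roc_auc, pr_auc
```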
Related papers
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology.
We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z)
- Cell Culture Assistive Application for Precipitation Image Diagnosis [0.0]
We develop an application to automatically detect precipitation on 384-well plates utilising optical microscope images.
Applying MN-pair contrastive clustering, we extract precipitation classes from approximately 20,000 patch images.
We also build a machine learning pipeline to detect precipitation from the maximum score of quadruplet well images.
arXiv Detail & Related papers (2024-07-29T11:42:32Z)
- Deep Learning Algorithms for Early Diagnosis of Acute Lymphoblastic Leukemia [0.0]
Acute lymphoblastic leukemia (ALL) is a form of blood cancer that affects the white blood cells.
In this study, we propose a binary image classification model to assist in the diagnostic process of ALL.
arXiv Detail & Related papers (2024-07-14T15:35:39Z)
- Histopathological Image Classification with Cell Morphology Aware Deep Neural Networks [11.749248917866915]
We propose a novel DeepCMorph model pre-trained to learn cell morphology and identify a large number of different cancer types.
We pretrained this module on the Pan-Cancer TCGA dataset consisting of over 270K tissue patches extracted from 8736 diagnostic slides from 7175 patients.
The proposed solution achieved a new state-of-the-art performance on the dataset under consideration, detecting 32 cancer types with over 82% accuracy and outperforming all previously proposed solutions by more than 4%.
arXiv Detail & Related papers (2024-07-11T16:03:59Z)
- Corneal endothelium assessment in specular microscopy images with Fuchs'
dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
arXiv Detail & Related papers (2022-10-13T15:34:20Z)
- Histopathological Imaging Classification of Breast Tissue for Cancer
Diagnosis Support Using Deep Learning Models [0.0]
Hematoxylin and eosin staining is considered the gold standard for cancer diagnosis.
Based on the idea of dividing the pathologic whole-slide image (WSI) into multiple patches, we used a [512, 512] window sliding from left to right and from top to bottom, with each step overlapping by 50%, to augment the data on a dataset of 400 images.
The EfficientNet model is a recently developed model that uniformly scales the width, depth, and resolution of the network with a set of fixed scaling factors, making it well suited for training on high-resolution images.
arXiv Detail & Related papers (2022-07-03T13:56:44Z)
- Development of an algorithm for medical image segmentation of bone
tissue in interaction with metallic implants [58.720142291102135]
This study develops an algorithm for calculating bone growth in contact with metallic implants.
Bone and implant tissue were manually segmented in the training data set.
In terms of network accuracy, the model reached around 98%.
arXiv Detail & Related papers (2022-04-22T08:17:20Z)
- Texture Characterization of Histopathologic Images Using Ecological
Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Fast whole-slide cartography in colon cancer histology using superpixels
and CNN classification [0.22312377591335414]
Whole-slide images typically have to be divided into smaller patches, which are then analyzed individually using machine learning-based approaches.
We propose to subdivide the image into coherent regions prior to classification by grouping visually similar adjacent image pixels into larger segments, i.e. superpixels.
The algorithm has been developed and validated on a dataset of 159 hand-annotated whole-slide images of colon resections, and its performance has been compared to a standard patch-based approach.
arXiv Detail & Related papers (2021-06-30T08:34:06Z)
- Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- ITSELF: Iterative Saliency Estimation fLexible Framework [68.8204255655161]
Salient object detection estimates the objects that most stand out in an image.
We propose a superpixel-based ITerative Saliency Estimation fLexible Framework (ITSELF) that allows any user-defined assumptions to be added to the model.
We compare ITSELF to two state-of-the-art saliency estimators on five metrics and six datasets.
arXiv Detail & Related papers (2020-06-30T16:51:31Z)
- An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.