Recognizing Magnification Levels in Microscopic Snapshots
- URL: http://arxiv.org/abs/2005.03748v1
- Date: Thu, 7 May 2020 20:48:29 GMT
- Title: Recognizing Magnification Levels in Microscopic Snapshots
- Authors: Manit Zaveri, Shivam Kalra, Morteza Babaie, Sultaan Shah, Savvas
Damaskinos, Hany Kashani, H.R. Tizhoosh
- Abstract summary: Recent advances in digital imaging have turned computer vision and machine learning into new tools for analyzing pathology images.
- Score: 2.6234848946076785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in digital imaging have turned computer vision and
machine learning into new tools for analyzing pathology images. This trend
could automate some of the tasks in diagnostic pathology and alleviate the
pathologist's workload. The final step of any cancer diagnosis procedure is
performed by an expert pathologist. These experts use microscopes with high
levels of optical magnification to observe minute characteristics of tissue
acquired through biopsy and fixed on glass slides. Switching between
magnifications, and finding the magnification level at which they can identify
the presence or absence of malignant tissue, is important. Because the majority
of pathologists still use light microscopy rather than digital scanners, a
camera mounted on the microscope is in many instances used to capture snapshots
of significant fields of view. Repositories of such snapshots usually do not
contain the magnification information. In this paper, we extract deep features
from images in the TCGA dataset with known magnification to train a classifier
for magnification recognition. We compare the results with LBP, a well-known
handcrafted feature-extraction method. The proposed approach achieved a mean
accuracy of 96% when a multi-layer perceptron was trained as the classifier.
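As a rough illustration of the pipeline the abstract describes (deep features extracted from snapshots with known magnification, a multi-layer perceptron classifier, and an LBP baseline), below is a minimal sketch. The choice of backbone (DenseNet-121), the LBP parameters, the magnification labels, and the data-loading placeholder are assumptions made for illustration only, not the authors' released code.

```python
# Minimal sketch (not the authors' implementation): extract deep features from
# tissue snapshots with a pretrained CNN, then train an MLP to predict the
# magnification level. An LBP histogram extractor is included as the
# handcrafted baseline mentioned in the abstract.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained backbone used as a frozen feature extractor (assumed choice).
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()  # keep the 1024-d pooled features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(path: str) -> np.ndarray:
    """1024-d deep feature vector for one snapshot."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        return backbone(img).squeeze(0).cpu().numpy()

def lbp_features(path: str, points: int = 24, radius: int = 3) -> np.ndarray:
    """Handcrafted LBP histogram used as the comparison baseline."""
    gray = np.asarray(Image.open(path).convert("L"))
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist

# Hypothetical list of (image_path, magnification_label) pairs, e.g. labels in
# {"5x", "10x", "20x", "40x"} derived from TCGA metadata. Fill in with real data.
snapshots = [...]

X = np.stack([deep_features(p) for p, _ in snapshots])
y = np.array([m for _, m in snapshots])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("magnification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Swapping `deep_features` for `lbp_features` on the same train/test split would reproduce the kind of deep-versus-handcrafted comparison the abstract reports, though the exact network, classifier settings, and evaluation protocol used in the paper may differ.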
Related papers
- BlurryScope: a cost-effective and compact scanning microscope for automated HER2 scoring using deep learning on blurry image data [0.0]
"BlurryScope" is a cost-effective and compact solution for automated inspection and analysis of tissue sections.
BlurryScope integrates specialized hardware with a neural network-based model to process motion-blurred histological images.
arXiv Detail & Related papers (2024-10-23T04:46:36Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Intra-operative Brain Tumor Detection with Deep Learning-Optimized Hyperspectral Imaging [37.21885467891782]
Surgery for gliomas (intrinsic brain tumors) is challenging due to the infiltrative nature of the lesion.
No real-time, intra-operative, label-free and wide-field tool is available to assist and guide the surgeon to find the relevant demarcations for these tumors.
We build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance.
arXiv Detail & Related papers (2023-02-06T15:52:03Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Increasing a microscope's effective field of view via overlapped imaging and machine learning [4.23935174235373]
This work demonstrates a multi-lens microscopic imaging system that overlaps multiple independent fields of view on a single sensor for high-efficiency automated specimen analysis.
arXiv Detail & Related papers (2021-10-10T22:52:36Z)
- Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in Multivariate Image Data [0.0]
Scope2Screen is a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images.
Our approach scales to analyzing 100 GB images with 10^9 or more pixels per channel, containing millions of cells.
We present interactive lensing techniques that operate at single-cell and tissue levels.
arXiv Detail & Related papers (2021-10-10T18:34:13Z)
- MTCD: Cataract Detection via Near Infrared Eye Images [69.62768493464053]
Cataract is a common eye disease and one of the leading causes of blindness and vision impairment.
We present a novel algorithm for cataract detection using near-infrared eye images.
Deep learning-based eye segmentation and multitask classification networks are presented.
arXiv Detail & Related papers (2021-10-06T08:10:28Z)
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct the setup incoherences, thus improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Selecting Regions of Interest in Large Multi-Scale Images for Cancer Pathology [0.0]
High resolution scans of microscopy slides offer enough information for a cancer pathologist to come to a conclusion regarding cancer presence, subtype, and severity based on measurements of features within the slide image at multiple scales and resolutions.
We explore approaches based on Reinforcement Learning and Beam Search to learn to progressively zoom into the whole-slide image (WSI) to detect Regions of Interest (ROIs) in liver pathology slides containing one of two types of liver cancer, namely Hepatocellular Carcinoma (HCC) and Cholangiocarcinoma (CC).
These ROIs can then be presented directly to the pathologist to aid in measurement and diagnosis or be used
arXiv Detail & Related papers (2020-07-03T15:27:41Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.