Segmentation of Cellular Patterns in Confocal Images of Melanocytic
Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net)
- URL: http://arxiv.org/abs/2001.01005v1
- Date: Fri, 3 Jan 2020 22:34:52 GMT
- Title: Segmentation of Cellular Patterns in Confocal Images of Melanocytic
Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net)
- Authors: Kivanc Kose, Alican Bozkurt, Christi Alessi-Fox, Melissa Gill,
Caterina Longo, Giovanni Pellacani, Jennifer Dy, Dana H. Brooks, Milind
Rajadhyaksha
- Abstract summary: "Multiscale Encoder-Decoder Network (MED-Net)" provides pixel-wise labeling into classes of patterns in a quantitative manner.
We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions.
- Score: 2.0487455621441377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In-vivo optical microscopy is advancing into routine clinical practice for
non-invasively guiding diagnosis and treatment of cancer and other diseases,
and thus beginning to reduce the need for traditional biopsy. However, reading
and analysis of the optical microscopic images are generally still qualitative,
relying mainly on visual examination. Here we present an automated semantic
segmentation method called "Multiscale Encoder-Decoder Network (MED-Net)" that
provides pixel-wise labeling into classes of patterns in a quantitative manner.
The novelty in our approach is the modeling of textural patterns at multiple
scales. This mimics the procedure for examining pathology images, which
routinely starts with low magnification (low resolution, large field of view)
followed by closer inspection of suspicious areas with higher magnification
(higher resolution, smaller fields of view). We trained and tested our model on
non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics
of melanocytic lesions, an extensive dataset for this application, collected at
four clinics in the US, and two in Italy. With patient-wise cross-validation,
we achieved pixel-wise mean sensitivity and specificity of $70\pm11\%$ and
$95\pm2\%$, respectively, with $0.71\pm0.09$ Dice coefficient over six classes.
In a second scenario, we partitioned the data clinic-wise and tested the
generalizability of the model over multiple clinics. In this setting, we
achieved pixel-wise mean sensitivity and specificity of $74\%$ and $95\%$,
respectively, with $0.75$ Dice coefficient. We compared MED-Net against the
state-of-the-art semantic segmentation models and achieved better quantitative
segmentation performance. Our results also suggest that, due to its nested
multiscale architecture, the MED-Net model annotated RCM mosaics more
coherently, avoiding unrealistically fragmented annotations.
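The pixel-wise sensitivity, specificity, and Dice coefficient reported above can be computed directly from predicted and ground-truth label maps. A minimal NumPy sketch (function names are illustrative, not taken from the MED-Net code):

```python
import numpy as np

def dice_coefficient(pred, target, num_classes):
    """Per-class Dice over integer label maps, averaged over
    classes present in either map."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:
            continue  # class absent in both maps; skip it
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))

def sensitivity_specificity(pred, target, cls):
    """One-vs-rest pixel-wise sensitivity and specificity for class `cls`."""
    p = (pred == cls)
    t = (target == cls)
    tp = np.logical_and(p, t).sum()
    fn = np.logical_and(~p, t).sum()
    tn = np.logical_and(~p, ~t).sum()
    fp = np.logical_and(p, ~t).sum()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec
```

Class-averaged (macro) Dice as sketched here matches the "over six classes" phrasing of the abstract; per-mosaic averaging would be a separate aggregation step.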
Related papers
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Histopathological Image Classification with Cell Morphology Aware Deep Neural Networks [11.749248917866915]
We propose a novel DeepCMorph model pre-trained to learn cell morphology and identify a large number of different cancer types.
We pretrained this module on the Pan-Cancer TCGA dataset consisting of over 270K tissue patches extracted from 8736 diagnostic slides from 7175 patients.
The proposed solution achieved a new state-of-the-art performance on the dataset under consideration, detecting 32 cancer types with over 82% accuracy and outperforming all previously proposed solutions by more than 4%.
arXiv Detail & Related papers (2024-07-11T16:03:59Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
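A residual adapter of the kind described above can be sketched in a few lines; this is an illustrative bottleneck module added to a frozen feature vector, not the paper's implementation:

```python
import numpy as np

def residual_adapter(x, W_down, W_up):
    """Hypothetical residual adapter: a small bottleneck MLP whose
    output is added back to the frozen encoder feature x, so the
    pre-trained pathway is preserved when the adapter is untrained."""
    h = np.maximum(W_down @ x, 0.0)  # down-project + ReLU
    return x + W_up @ h              # up-project + residual add
```

With zero-initialized `W_up`, the adapter is an identity map at the start of training, which is the usual motivation for the residual form.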
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - CodaMal: Contrastive Domain Adaptation for Malaria Detection in Low-Cost Microscopes [51.5625352379093]
Malaria is a major health issue worldwide, and its diagnosis requires scalable solutions that can work effectively with low-cost microscopes (LCM).
Deep learning-based methods have shown success in computer-aided diagnosis from microscopic images.
These methods need annotated images that show cells affected by malaria parasites and their life stages.
Annotating images from LCM significantly increases the burden on medical experts compared to annotating images from high-cost microscopes (HCM).
arXiv Detail & Related papers (2024-02-16T06:57:03Z) - Multi-scale Multi-site Renal Microvascular Structures Segmentation for
Whole Slide Imaging in Renal Pathology [4.743463035587953]
We present Omni-Seg, a novel single dynamic network method that capitalizes on multi-site, multi-scale training data.
We train a singular deep network using images from two datasets, HuBMAP and NEPTUNE.
Our proposed method provides renal pathologists with a powerful computational tool for the quantitative analysis of renal microvascular structures.
arXiv Detail & Related papers (2023-08-10T16:26:03Z) - Optimizations of Autoencoders for Analysis and Classification of
Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ a type of Artificial Neural Network: Deep Learning Autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z) - Retinal Image Segmentation with Small Datasets [25.095695898777656]
Many eye diseases, like Diabetic Macular Edema (DME), Age-related Macular Degeneration (AMD) and Glaucoma, manifest in the retina and can cause irreversible blindness or severely impair central vision.
Optical Coherence Tomography (OCT), a 3D scan of the retina, can be used to diagnose and monitor changes in the retinal anatomy.
Many Deep Learning (DL) methods have been successful in developing automated tools to monitor pathological changes in the retina.
arXiv Detail & Related papers (2023-03-09T08:32:14Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised
Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
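The two-stage pipeline of the last entry above (a cheap global pass selects informative regions, an expensive local pass inspects them, and a fusion step combines both) can be sketched as follows. All function and parameter names here are hypothetical placeholders, not the paper's API:

```python
import numpy as np

def two_stage_predict(image, coarse_fn, fine_fn, fuse_fn, patch=4, k=2):
    """Hypothetical two-stage pipeline: `coarse_fn` is a low-capacity
    model scoring the whole image, `fine_fn` a higher-capacity model
    run only on the top-k patches, and `fuse_fn` combines both."""
    h, w = image.shape
    # 1) Coarse saliency over the whole image (low-capacity pass).
    saliency = coarse_fn(image)  # same shape as image
    # 2) Score non-overlapping patches; keep the k most informative.
    scores = {}
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            scores[(i, j)] = saliency[i:i + patch, j:j + patch].mean()
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    # 3) High-capacity pass on the chosen regions only.
    local = [fine_fn(image[i:i + patch, j:j + patch]) for (i, j) in top]
    # 4) Fuse global and local evidence into one prediction.
    return fuse_fn(saliency.mean(), local)
```

The design keeps memory bounded: only `k` small patches ever reach the high-capacity model, which is the point of the region-selection stage.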
This list is automatically generated from the titles and abstracts of the papers in this site.