Few Shot Learning for the Classification of Confocal Laser
Endomicroscopy Images of Head and Neck Tumors
- URL: http://arxiv.org/abs/2311.07216v1
- Date: Mon, 13 Nov 2023 10:17:00 GMT
- Authors: Marc Aubreville, Zhaoya Pan, Matti Sievert, Jonas Ammeling, Jonathan
Ganz, Nicolai Oetter, Florian Stelzle, Ann-Kathrin Frenken, Katharina
Breininger, and Miguel Goncalves
- Abstract summary: We evaluate four popular few-shot learning methods for their capability to generalize to unseen anatomical domains in CLE images.
We evaluate this on images of sinonasal tumors (SNT) from five patients and on images of the vocal folds (VF) from 11 patients using a cross-validation scheme.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The surgical removal of head and neck tumors requires safe margins, which are
usually confirmed intraoperatively by means of frozen sections. This method is,
in itself, an oversampling procedure, which has a relatively low sensitivity
compared to the definitive tissue analysis on paraffin-embedded sections.
Confocal laser endomicroscopy (CLE) is an in-vivo imaging technique that has
shown its potential in the live optical biopsy of tissue. An automated
analysis of this notoriously difficult-to-interpret modality would help
surgeons. However, CLE images show a wide variability of patterns, caused both
by individual factors and, most strongly, by the anatomical structures of the
imaged tissue, making interpretation a challenging pattern recognition task. In
this work, we evaluate four popular few-shot learning (FSL) methods for their
capability to generalize to unseen anatomical domains in CLE images. We
evaluate this on images of sinonasal tumors (SNT) from five patients and on
images of the vocal folds (VF) from 11 patients using a cross-validation
scheme. The best respective approach reached a median accuracy of 79.6% on the
rather homogeneous VF dataset, but only of 61.6% for the highly diverse SNT
dataset. Our results indicate that FSL on CLE images is viable, but strongly
affected by the number of patients, as well as the diversity of anatomical
patterns.
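The FSL evaluation described in the abstract can be illustrated with a minimal sketch of one popular method family, prototypical networks: class prototypes are computed as the mean of a few labeled "support" embeddings, and each "query" embedding is assigned to the nearest prototype. The random feature vectors, class names, and dimensions below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_episode(support, query):
    """Assign each query embedding to the nearest class prototype.

    support: dict mapping class label -> (n_shot, dim) array of embeddings
    query:   (n_query, dim) array of embeddings
    Returns an array of predicted labels.
    """
    labels = sorted(support)
    # Prototype = mean of the support embeddings for each class.
    prototypes = np.stack([support[c].mean(axis=0) for c in labels])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return np.array(labels)[dists.argmin(axis=1)]

# Toy 2-way, 5-shot episode with 16-dimensional stand-in embeddings;
# the two classes are drawn from well-separated Gaussian clusters.
dim = 16
support = {
    "healthy": rng.normal(0.0, 1.0, (5, dim)),
    "tumor": rng.normal(3.0, 1.0, (5, dim)),
}
query = rng.normal(3.0, 1.0, (4, dim))  # drawn near the "tumor" cluster
print(classify_episode(support, query))
```

Because no gradient step depends on the query classes, such a method can, in principle, be applied to an unseen anatomical domain by supplying a handful of labeled support images from it, which is the generalization setting the paper tests.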
Related papers
- Style transfer between Microscopy and Magnetic Resonance Imaging via
Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z) - A Cascaded Approach for Ultra High Performance Lesion Detection and
False Positive Removal in Liver CT Scans [15.352636778576171]
Liver cancer has high morbidity and mortality rates worldwide.
Automatically detecting and classifying liver lesions in CT images have the potential to improve the clinical workflow.
In this work, we customize a multi-object labeling tool for multi-phase CT images.
arXiv Detail & Related papers (2023-06-28T09:11:34Z) - Intra-operative Brain Tumor Detection with Deep Learning-Optimized
Hyperspectral Imaging [37.21885467891782]
Surgery for gliomas (intrinsic brain tumors) is challenging due to the infiltrative nature of the lesion.
No real-time, intra-operative, label-free and wide-field tool is available to assist and guide the surgeon to find the relevant demarcations for these tumors.
We build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance.
arXiv Detail & Related papers (2023-02-06T15:52:03Z) - FetReg2021: A Challenge on Placental Vessel Segmentation and
Registration in Fetoscopy [52.3219875147181]
Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS)
The procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination.
Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking.
Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopy videos.
arXiv Detail & Related papers (2022-06-24T23:44:42Z) - Image translation of Ultrasound to Pseudo Anatomical Display Using
Artificial Intelligence [0.0]
CycleGAN was used to learn each domain properties separately and enforce cross domain cycle consistency.
The generated pseudo anatomical images provide improved visual discrimination of the lesions with clearer border definition and pronounced contrast.
arXiv Detail & Related papers (2022-02-16T13:31:49Z) - Learned super resolution ultrasound for improved breast lesion
characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Harvesting, Detecting, and Characterizing Liver Lesions from Large-scale
Multi-phase CT Data via Deep Dynamic Texture Learning [24.633802585888812]
We propose a fully-automated and multi-stage liver tumor characterization framework for dynamic contrast computed tomography (CT)
Our system comprises four sequential processes of tumor proposal detection, tumor harvesting, primary tumor site selection, and deep texture-based tumor characterization.
arXiv Detail & Related papers (2020-06-28T19:55:34Z) - Co-Heterogeneous and Adaptive Segmentation from Multi-Source and
Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion
Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogenous and adaptive segmentation (CHASe)
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sørensen coefficients by $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z) - Segmentation of Cellular Patterns in Confocal Images of Melanocytic
Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net) [2.0487455621441377]
The "Multiscale Encoder-Decoder Network (MED-Net)" provides pixel-wise labeling into classes of patterns in a quantitative manner.
We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions.
arXiv Detail & Related papers (2020-01-03T22:34:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.