Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification
- URL: http://arxiv.org/abs/2106.15893v1
- Date: Wed, 30 Jun 2021 08:34:06 GMT
- Title: Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification
- Authors: Frauke Wilm, Michaela Benz, Volker Bruns, Serop Baghdadlian, Jakob
Dexl, David Hartmann, Petr Kuritcyn, Martin Weidenfeller, Thomas Wittenberg,
Susanne Merkel, Arndt Hartmann, Markus Eckstein, Carol I. Geppert
- Abstract summary: Whole-slide-images typically have to be divided into smaller patches which are then analyzed individually using machine learning-based approaches.
We propose to subdivide the image into coherent regions prior to classification by grouping visually similar adjacent image pixels into larger segments, i.e. superpixels.
The algorithm has been developed and validated on a dataset of 159 hand-annotated whole-slide-images of colon resections and its performance has been compared to a standard patch-based approach.
- Score: 0.22312377591335414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whole-slide-image cartography is the process of automatically detecting and
outlining different tissue types in digitized histological specimens. This
semantic segmentation provides a basis for many follow-up analyses and can
potentially guide subsequent medical decisions. Due to their large size,
whole-slide-images typically have to be divided into smaller patches, which are
then analyzed individually using machine learning-based approaches. Thereby,
local dependencies of image regions are lost, and since a whole-slide-image
comprises many thousands of such patches, this process is inherently slow. We
propose to subdivide the image into coherent regions prior to classification by
grouping visually similar adjacent image pixels into larger segments, i.e.
superpixels. Afterwards, only a random subset of patches per superpixel is
classified and patch labels are combined into a single superpixel label. The
algorithm has been developed and validated on a dataset of 159 hand-annotated
whole-slide-images of colon resections and its performance has been compared to
a standard patch-based approach. The algorithm shows an average speed-up of 41%
on the test data and the overall accuracy is increased from 93.8% to 95.7%. We
additionally propose a metric for identifying superpixels with an uncertain
classification so they can be excluded from further analysis. Finally, we
evaluate two potential medical applications, namely tumor area estimation
including tumor invasive margin generation and tumor composition analysis.
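The pipeline described in the abstract — segment the slide into superpixels, classify only a random subset of patches per superpixel, combine the patch labels into one superpixel label, and flag uncertain superpixels — can be sketched roughly as below. This is a minimal numpy sketch, not the authors' implementation: `classify_patch` is a hypothetical stand-in for the CNN patch classifier, and the agreement-threshold uncertainty rule is an illustrative assumption, not necessarily the paper's metric.

```python
import numpy as np

def classify_superpixels(superpixel_map, classify_patch, patch_size=32,
                         patches_per_superpixel=8, min_agreement=0.6, rng=None):
    """Assign one tissue-class label per superpixel.

    superpixel_map: 2D int array, one superpixel id per pixel (e.g. from SLIC).
    classify_patch: hypothetical stand-in for the CNN; maps a patch centre
        (y, x) and patch_size to a class id.
    Superpixels whose majority vote share falls below `min_agreement` are
    marked uncertain (label -1) so they can be excluded from further analysis.
    """
    rng = np.random.default_rng(rng)
    labels = {}
    for sp in np.unique(superpixel_map):
        # all pixel coordinates belonging to this superpixel
        ys, xs = np.nonzero(superpixel_map == sp)
        # classify only a random subset of patch centres inside it
        n = min(patches_per_superpixel, len(ys))
        idx = rng.choice(len(ys), size=n, replace=False)
        votes = [classify_patch(ys[i], xs[i], patch_size) for i in idx]
        # majority vote over the sampled patch labels
        classes, counts = np.unique(votes, return_counts=True)
        majority = classes[np.argmax(counts)]
        share = counts.max() / len(votes)
        labels[sp] = majority if share >= min_agreement else -1
    return labels
```

Because each superpixel covers many patches but only a fixed number are classified, the per-slide workload shrinks with superpixel size, which is the source of the reported speed-up.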
Related papers
- Efficient Classification of Histopathology Images [5.749787074942512]
We use images with annotated tumor regions to identify a set of tumor patches and a set of benign patches in a cancerous slide.
This addresses an important problem in patch-level classification: the majority of patches from an image labeled as 'cancerous' are actually tumor-free.
arXiv Detail & Related papers (2024-09-08T17:41:04Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Probabilistic Deep Metric Learning for Hyperspectral Image Classification [91.5747859691553]
This paper proposes a probabilistic deep metric learning framework for hyperspectral image classification.
It aims to predict the category of each pixel for an image captured by hyperspectral sensors.
Our framework can be readily applied to existing hyperspectral image classification methods.
arXiv Detail & Related papers (2022-11-15T17:57:12Z)
- Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Gigapixel Histopathological Image Analysis using Attention-based Neural Networks [7.1715252990097325]
We propose a CNN structure consisting of a compressing path and a learning path.
Our method integrates both global and local information, is flexible with regard to the size of the input images and only requires weak image-level labels.
arXiv Detail & Related papers (2021-01-25T10:18:52Z)
- Overcoming the limitations of patch-based learning to detect cancer in whole slide images [0.15658704610960567]
Whole slide images (WSIs) pose unique challenges when training deep learning models.
We outline the differences between patch or slide-level classification versus methods that need to localize or segment cancer accurately across the whole slide.
We propose a negative data sampling strategy, which drastically reduces the false positive rate and improves each metric pertinent to our problem.
arXiv Detail & Related papers (2020-12-01T16:37:18Z)
- A Multi-resolution Model for Histopathology Image Classification and Localization with Multiple Instance Learning [9.36505887990307]
We propose a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction.
The model is developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients.
The model achieved 92.7% accuracy, 81.8% Cohen's Kappa for benign, low grade (i.e. Grade group 1) and high grade (i.e. Grade group >= 2) prediction, an area under the receiver operating characteristic curve (AUROC) of 98.2% and an average precision (AP) of 97.4%.
arXiv Detail & Related papers (2020-11-05T06:42:39Z)
- Ink Marker Segmentation in Histopathology Images Using Deep Learning [1.0118241139691948]
We propose to segment the ink-marked areas of pathology patches through a deep network.
A dataset of 4,305 patches from 79 whole slide images was created and different networks were trained.
The results showed that an FPN model with EfficientNet-B3 as the backbone was the superior configuration, with an F1 score of 94.53%.
arXiv Detail & Related papers (2020-10-29T18:09:59Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.