Ink Marker Segmentation in Histopathology Images Using Deep Learning
- URL: http://arxiv.org/abs/2010.15865v1
- Date: Thu, 29 Oct 2020 18:09:59 GMT
- Title: Ink Marker Segmentation in Histopathology Images Using Deep Learning
- Authors: Danial Maleki, Mehdi Afshari, Morteza Babaie, H.R. Tizhoosh
- Abstract summary: We propose to segment the ink-marked areas of pathology patches through a deep network.
A dataset from $79$ whole slide images with $4,305$ patches was created and different networks were trained.
The results showed that an FPN model with EfficientNet-B3 as the backbone was the superior configuration, reaching an F1 score of $94.53\%$.
- Score: 1.0118241139691948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the recent advancements in machine vision, digital pathology has
gained significant attention. Histopathology images are distinctly rich in
visual information. The tissue glass slide images are utilized for disease
diagnosis. Researchers study many methods to process histopathology images and
facilitate fast and reliable diagnosis; therefore, the availability of
high-quality slides becomes paramount. The quality of the images can be
negatively affected when the glass slides are ink-marked by pathologists to
delineate regions of interest. As an example, in one of the largest public
histopathology datasets, The Cancer Genome Atlas (TCGA), approximately $12\%$
of the digitized slides are affected by manual delineations through ink
markings. To process these open-access slide images and other repositories for
the design and validation of new methods, an algorithm to detect the marked
regions of the images is essential to avoid confusing tissue pixels with
ink-colored pixels for computer methods. In this study, we propose to segment
the ink-marked areas of pathology patches through a deep network. A dataset
from $79$ whole slide images with $4,305$ patches was created and different
networks were trained. Finally, an FPN model with EfficientNet-B3 as the
backbone was found to be the superior configuration, achieving an F1 score of
$94.53\%$.
Related papers
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at the image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect because it remains asymptomatic until the disease becomes severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification [0.22312377591335414]
Whole-slide images typically have to be divided into smaller patches, which are then analyzed individually using machine learning-based approaches.
We propose to subdivide the image into coherent regions prior to classification by grouping visually similar adjacent image pixels into larger segments, i.e. superpixels.
The algorithm has been developed and validated on a dataset of 159 hand-annotated whole-slide-images of colon resections and its performance has been compared to a standard patch-based approach.
arXiv Detail & Related papers (2021-06-30T08:34:06Z)
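A minimal sketch of the superpixel pre-segmentation idea described in the entry above, assuming SLIC from scikit-image as the grouping algorithm (the entry does not state which superpixel method or parameters are used):

```python
# Minimal sketch of superpixel pre-segmentation before region classification.
# SLIC and its parameter values are illustrative assumptions, not necessarily
# the algorithm used in the cited paper.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()  # placeholder RGB image standing in for a slide region
segments = slic(image, n_segments=500, compactness=10, start_label=0)

# Each superpixel can then be cropped (e.g., by its bounding box) and passed
# to a CNN classifier instead of classifying fixed-size rectangular patches.
for label in np.unique(segments)[:3]:
    ys, xs = np.nonzero(segments == label)
    region = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    print(label, region.shape)
```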
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Gigapixel Histopathological Image Analysis using Attention-based Neural Networks [7.1715252990097325]
We propose a CNN structure consisting of a compressing path and a learning path.
Our method integrates both global and local information, is flexible with regard to the size of the input images and only requires weak image-level labels.
arXiv Detail & Related papers (2021-01-25T10:18:52Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Blind deblurring for microscopic pathology images using deep learning networks [0.0]
We demonstrate a deep-learning-based approach that can alleviate the defocus and motion blur of a microscopic image.
It produces a sharper and cleaner image with retrieved fine details without prior knowledge of the blur type, blur extent and pathological stain.
We test our approach on different types of pathology specimens and demonstrate great performance on image blur correction and the subsequent improvement on the diagnosis outcome of AI algorithms.
arXiv Detail & Related papers (2020-11-24T03:52:45Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
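A minimal sketch of the two-headed idea described in the Y-Net entry above: a shared VGG11 encoder feeding one classification head for geometric orientation and one small segmentation branch for annotation masks. The head designs below are illustrative assumptions, not the paper's exact Y-Net.

```python
# Sketch of a two-headed ("Y-shaped") network on a shared VGG11 encoder that
# jointly predicts geometric orientation (classification) and an annotation
# mask (segmentation). Head designs are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg11

class TwoHeadedNet(nn.Module):
    def __init__(self, num_orientations=4):
        super().__init__()
        self.encoder = vgg11(weights=None).features       # shared VGG11 backbone
        self.orient_head = nn.Sequential(                 # classification branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, num_orientations),
        )
        self.seg_head = nn.Sequential(                    # lightweight decoder branch
            nn.ConvTranspose2d(512, 64, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.orient_head(feats), self.seg_head(feats)

x = torch.randn(2, 3, 256, 256)   # dummy chest x-rays as 3-channel input
orientation_logits, mask_logits = TwoHeadedNet()(x)
print(orientation_logits.shape, mask_logits.shape)
```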
- Resource-Frugal Classification and Analysis of Pathology Slides Using Image Entropy [0.0]
Histopathology slides of lung malignancies are classified using resource-frugal convolutional neural networks (CNNs).
A lightweight CNN produces tile-level classifications that are aggregated to classify the slide.
Color-coded probability maps are created by overlapping tiles and averaging the tile-level probabilities at the pixel level.
arXiv Detail & Related papers (2020-02-16T18:42:36Z)
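A minimal sketch of the tile-aggregation step described in the entry above: overlapping tiles each receive a class probability, and per-pixel values are averaged over all tiles covering that pixel to form the color-codable map. Tile size, stride, and the classifier are placeholders, not the paper's settings.

```python
# Sketch of building a slide-level probability map from overlapping tiles:
# each tile gets a scalar class probability, and per-pixel values are
# averaged over all tiles that cover the pixel. Tile size, stride, and the
# classifier are illustrative assumptions.
import numpy as np

def probability_map(slide, classify_tile, tile=256, stride=128):
    h, w = slide.shape[:2]
    prob_sum = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            p = classify_tile(slide[y:y + tile, x:x + tile])  # scalar in [0, 1]
            prob_sum[y:y + tile, x:x + tile] += p
            counts[y:y + tile, x:x + tile] += 1.0
    return prob_sum / np.maximum(counts, 1.0)

# Dummy usage with a stand-in classifier (mean intensity as a fake probability).
slide = np.random.rand(1024, 1024, 3)
pmap = probability_map(slide, classify_tile=lambda t: float(t.mean()))
print(pmap.shape, pmap.min(), pmap.max())
```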
- Breast Cancer Histopathology Image Classification and Localization using Multiple Instance Learning [2.4178424543973267]
Computer-aided pathology, which analyzes microscopic histopathology images for diagnosis, can reduce the cost and delays of diagnosis.
Deep learning in histopathology has attracted attention over the last decade, achieving state-of-the-art performance in classification and localization tasks.
We present classification and localization results on two publicly available datasets, BreakHIS and BACH.
arXiv Detail & Related papers (2020-02-16T10:29:16Z)
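As an illustration of the multiple instance learning setting named in the entry above, the sketch below uses attention-based pooling of patch embeddings into a slide-level prediction, a common MIL formulation; the embedding size, attention dimension, and pooling scheme are assumptions, not necessarily the cited paper's architecture.

```python
# Illustrative attention-based MIL pooling over a bag of patch embeddings.
# Dimensions and the attention formulation are assumptions for illustration.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, embed_dim=512, attn_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings):                  # (num_patches, embed_dim)
        scores = self.attention(patch_embeddings)         # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)            # attention over the bag
        bag = (weights * patch_embeddings).sum(dim=0)     # (embed_dim,)
        return self.classifier(bag), weights              # slide logits + localization cue

bag = torch.randn(200, 512)    # e.g., 200 patch features from a CNN encoder
logits, attn = AttentionMIL()(bag)
print(logits.shape, attn.shape)
```

The attention weights over patches give a rough localization signal alongside the slide-level classification, which is how attention-based MIL is commonly used for both tasks.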