A Multi-resolution Model for Histopathology Image Classification and
Localization with Multiple Instance Learning
- URL: http://arxiv.org/abs/2011.02679v1
- Date: Thu, 5 Nov 2020 06:42:39 GMT
- Title: A Multi-resolution Model for Histopathology Image Classification and
Localization with Multiple Instance Learning
- Authors: Jiayun Li, Wenyuan Li, Anthony Sisk, Huihui Ye, W. Dean Wallace,
William Speier, Corey W. Arnold
- Abstract summary: We propose a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction.
The model is developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients.
The model achieved 92.7% accuracy and a Cohen's kappa of 81.8% for benign, low-grade (i.e., Grade Group 1) and high-grade (i.e., Grade Group >= 2) prediction, as well as an area under the receiver operating characteristic curve (AUROC) of 98.2% and an average precision (AP) of 97.4% for differentiating malignant and benign slides.
- Score: 9.36505887990307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Histopathological images provide rich information for disease diagnosis.
Large numbers of histopathological images have been digitized into high
resolution whole slide images, opening opportunities in developing
computational image analysis tools to reduce pathologists' workload and
potentially improve inter- and intra-observer agreement. Most previous work on
whole slide image analysis has focused on classification or segmentation of
small pre-selected regions-of-interest, which requires fine-grained annotation
and is non-trivial to extend for large-scale whole slide analysis. In this
paper, we propose a multi-resolution multiple instance learning model that
leverages saliency maps to detect suspicious regions for fine-grained grade
prediction. Instead of relying on expensive region- or pixel-level annotations,
our model can be trained end-to-end with only slide-level labels. The model is
developed on a large-scale prostate biopsy dataset containing 20,229 slides
from 830 patients. The model achieved 92.7% accuracy and a Cohen's kappa of
81.8% for benign, low-grade (i.e., Grade Group 1) and high-grade (i.e., Grade
Group >= 2) prediction, as well as an area under the receiver operating
characteristic curve (AUROC) of 98.2% and an average precision (AP) of 97.4%
for differentiating malignant and benign slides. The model obtained an AUROC of
99.4% and an AP of 99.8% for
cancer detection on an external dataset.
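To make the weakly supervised setup concrete, below is a minimal PyTorch sketch of gated-attention MIL pooling, in which the learned attention weights double as a patch-level saliency map. It illustrates the general technique only; the layer sizes, the two-resolution workflow, and the top-k selection are assumptions, not the authors' exact architecture.

```python
# Gated-attention MIL pooling sketch: patch embeddings are aggregated into one
# slide embedding, and the attention weights act as a patch-level saliency map.
# A sketch of the general technique, not the paper's exact model.
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=3):
        super().__init__()
        self.attn_V = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_U = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                     # feats: (n_patches, feat_dim)
        a = self.attn_w(self.attn_V(feats) * self.attn_U(feats))
        a = torch.softmax(a, dim=0)               # saliency over patches
        slide_emb = (a * feats).sum(dim=0)        # attention-weighted pooling
        return self.classifier(slide_emb), a.squeeze(-1)

# Hypothetical multi-resolution use: classify with low-resolution features,
# then re-extract high-resolution patches where the saliency map is high.
low_res_feats = torch.randn(1000, 512)            # placeholder patch features
logits, saliency = GatedAttentionMIL()(low_res_feats)
suspicious_idx = saliency.topk(64).indices        # regions for fine-grained grading
```

Trained end-to-end with a slide-level cross-entropy loss on `logits`, this setup needs no region- or pixel-level annotations, which matches the weak-supervision claim above.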
Related papers
- Histopathological Image Classification with Cell Morphology Aware Deep Neural Networks [11.749248917866915]
We propose a novel DeepCMorph model pre-trained to learn cell morphology and identify a large number of different cancer types.
We pretrained this module on the Pan-Cancer TCGA dataset consisting of over 270K tissue patches extracted from 8736 diagnostic slides from 7175 patients.
The proposed solution achieved a new state-of-the-art performance on the dataset under consideration, detecting 32 cancer types with over 82% accuracy and outperforming all previously proposed solutions by more than 4%.
arXiv Detail & Related papers (2024-07-11T16:03:59Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL model and compute a confidence metric for every image in the dataset in order to select the most uncertain WSIs for expert annotation.
Combined with a novel attention-guiding loss, this boosts the accuracy of the trained models while requiring only a few annotated regions per class.
The approach may in the future serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
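A toy version of the selection step is sketched below; the entropy-based confidence is an assumption, not necessarily the paper's exact metric.

```python
# Uncertainty-driven selection sketch: score every slide with the current MIL
# model and send the least confident ones to the expert for annotation.
import torch

def select_uncertain_slides(slide_probs: torch.Tensor, budget: int) -> torch.Tensor:
    """slide_probs: (n_slides, n_classes) softmax outputs of the MIL model."""
    entropy = -(slide_probs * slide_probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(budget).indices            # most uncertain slides first

probs = torch.softmax(torch.randn(500, 2), dim=1)  # placeholder predictions
to_annotate = select_uncertain_slides(probs, budget=20)
```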
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Efficient subtyping of ovarian cancer histopathology whole slide images using active sampling in multiple instance learning [2.038893829552157]
Discriminative Region Active Sampling for Multiple Instance Learning (DRAS-MIL) is a computationally efficient slide classification method using attention scores to focus sampling on highly discriminative regions.
We show that DRAS-MIL can achieve similar classification performance to exhaustive slide analysis, with a 3-fold cross-validated AUC of 0.8679.
Our approach uses at most 18% of the memory of the standard approach, while taking 33% of the evaluation time on a GPU and only 14% on a CPU alone.
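The following loop illustrates the active-sampling idea in the spirit of DRAS-MIL; the random seeding, the neighborhood rule, and the scan-order patch indexing are all assumptions, not the published specification.

```python
# Attention-guided active sampling sketch: start from a random subset of
# patches, then spend the remaining budget next to the highest-attention patch
# seen so far, so only a fraction of the slide is ever evaluated.
import torch

def active_sample(feats, mil_model, init_k=16, budget=64):
    """feats: (n_patches, feat_dim); mil_model returns (logits, attention)."""
    n = feats.shape[0]
    sampled = torch.randperm(n)[:init_k].tolist()
    while len(sampled) < min(budget, n):
        _, attn = mil_model(feats[sampled])        # attention over sampled set
        center = sampled[int(attn.argmax())]       # most discriminative patch
        new = [i for i in (center - 1, center + 1)
               if 0 <= i < n and i not in sampled]
        if not new:                                # neighbors exhausted: random
            new = [i for i in range(n) if i not in sampled][:1]
        sampled.extend(new)
    return sampled

demo_model = lambda f: (None, f.norm(dim=1))       # stand-in attention scorer
picked = active_sample(torch.randn(1000, 512), demo_model)
```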
arXiv Detail & Related papers (2023-02-17T13:28:06Z)
- WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z)
- DenseNet approach to segmentation and classification of dermatoscopic skin lesions images [0.0]
This paper proposes an improved method for segmentation and classification of skin lesions using two architectures.
The combination of U-Net and DenseNet121 provides acceptable results in dermatoscopic image analysis.
Cancerous and non-cancerous samples were detected by the DenseNet121 network with 79.49% and 93.11% accuracy, respectively.
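A minimal sketch of such a segment-then-classify pipeline, with the masking and preprocessing choices as assumptions:

```python
# Segment-then-classify sketch: a U-Net lesion mask zeroes out background
# pixels before a DenseNet121 head classifies the lesion crop.
import torch
import torch.nn as nn
from torchvision.models import densenet121

classifier = densenet121(weights=None)
classifier.classifier = nn.Linear(classifier.classifier.in_features, 2)

image = torch.rand(1, 3, 224, 224)                 # placeholder dermatoscopic image
mask = (torch.rand(1, 1, 224, 224) > 0.5).float()  # stand-in for U-Net output
lesion_only = image * mask                         # keep lesion, zero background
logits = classifier(lesion_only)                   # cancerous vs non-cancerous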
arXiv Detail & Related papers (2021-10-09T19:12:23Z)
- Multi-Scale Input Strategies for Medulloblastoma Tumor Classification using Deep Transfer Learning [59.30734371401316]
Medulloblastoma is the most common malignant brain cancer among children.
CNNs have shown promising results for medulloblastoma (MB) subtype classification.
We study the impact of tile size and input strategy.
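One way to realize a multi-scale input strategy is sketched below; the tile sizes and the single shared center are assumptions for illustration.

```python
# Multi-scale tiling sketch: extract concentric tiles of increasing size around
# one center and resize each to the network's input resolution, giving the
# model several levels of context for the same location.
import torch
import torch.nn.functional as F

def multiscale_tiles(slide, cy, cx, sizes=(128, 256, 512), out=224):
    """slide: (C, H, W) tensor; returns one (C, out, out) tile per size."""
    tiles = []
    for s in sizes:
        half = s // 2
        y0, x0 = max(0, cy - half), max(0, cx - half)
        tile = slide[:, y0:y0 + s, x0:x0 + s].unsqueeze(0)
        tiles.append(F.interpolate(tile, size=(out, out), mode="bilinear",
                                   align_corners=False).squeeze(0))
    return tiles

tiles = multiscale_tiles(torch.rand(3, 4096, 4096), cy=2048, cx=2048)
```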
arXiv Detail & Related papers (2021-09-14T09:42:37Z)
- Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification [0.22312377591335414]
Whole-slide images typically have to be divided into smaller patches, which are then analyzed individually using machine-learning-based approaches.
We propose to subdivide the image into coherent regions prior to classification by grouping visually similar adjacent image pixels into larger segments, i.e., superpixels.
The algorithm has been developed and validated on a dataset of 159 hand-annotated whole-slide images of colon resections, and its performance has been compared to a standard patch-based approach.
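The summary does not name the superpixel algorithm, so the sketch below uses SLIC as one common choice; the n_segments and compactness values are guesses.

```python
# Superpixel step sketch: partition a slide region into visually coherent
# segments, each of which becomes one classification unit instead of a grid patch.
import numpy as np
from skimage.segmentation import slic

region = np.random.rand(1024, 1024, 3)       # placeholder slide region (RGB)
labels = slic(region, n_segments=300, compactness=10.0, start_label=0)

# Crop each superpixel's bounding box and hand it to the tissue classifier
# (classifier not shown).
for sp in range(labels.max() + 1):
    ys, xs = np.nonzero(labels == sp)
    crop = region[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```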
arXiv Detail & Related papers (2021-06-30T08:34:06Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages, called patches, are extracted and classified individually, yielding patch-level predictions that must then be aggregated into a slide-level diagnosis.
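A hedged sketch of a Wide & Deep-style aggregation head follows; the inputs chosen here (summary statistics plus a probability histogram) are assumptions, not the paper's feature set.

```python
# Wide & Deep-style aggregation sketch: the "wide" branch is linear in simple
# slide-level statistics of the patch predictions, the "deep" branch is an MLP
# over their histogram, and the two outputs are summed into one slide logit.
import torch
import torch.nn as nn

class WideDeepAggregator(nn.Module):
    def __init__(self, n_bins=10, n_stats=3):
        super().__init__()
        self.wide = nn.Linear(n_stats, 1)
        self.deep = nn.Sequential(nn.Linear(n_bins, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, patch_probs):              # (n_patches,) cancer probabilities
        stats = torch.stack([patch_probs.mean(), patch_probs.max(),
                             (patch_probs > 0.5).float().mean()])
        hist = torch.histc(patch_probs, bins=10, min=0.0, max=1.0)
        return self.wide(stats) + self.deep(hist / patch_probs.numel())

slide_logit = WideDeepAggregator()(torch.rand(800))   # placeholder patch probs
```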
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest X-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, who judged the geometry acceptable in 95.8% of cases and the annotation mask acceptable in 96.2%, compared to 27.0% and 34.9%, respectively, for control images.
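A rough sketch of such a Y-shaped model is shown below; the decoder layout and head sizes are assumptions, only the shared VGG11 trunk with two output branches follows the description above.

```python
# Y-shaped model sketch: a shared VGG11 feature trunk feeds both an orientation
# classifier and a small upsampling decoder for the annotation mask.
import torch
import torch.nn as nn
from torchvision.models import vgg11

class YNetSketch(nn.Module):
    def __init__(self, n_orientations=4):
        super().__init__()
        self.encoder = vgg11(weights=None).features          # shared VGG11 trunk
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(512, n_orientations))
        self.seg_head = nn.Sequential(                        # coarse mask decoder
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False))

    def forward(self, x):
        feats = self.encoder(x)                               # (B, 512, H/32, W/32)
        return self.cls_head(feats), self.seg_head(feats)

orient_logits, mask_logits = YNetSketch()(torch.rand(1, 3, 224, 224))
```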
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
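The global-local-fusion pattern described above can be sketched as follows; the module sizes, the crop rule, and the crude global descriptor are assumptions for illustration.

```python
# Global-local-fusion sketch: a cheap global network yields a saliency map,
# the top-scoring locations are re-examined by a stronger local network, and a
# fusion layer combines both views for the final prediction.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, n_classes=2, k=4, crop=64):
        super().__init__()
        self.k, self.crop = k, crop
        self.global_net = nn.Sequential(nn.Conv2d(1, 16, 7, stride=4, padding=3),
                                        nn.ReLU(), nn.Conv2d(16, 1, 1))
        self.local_net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fuse = nn.Linear(16 + 32, n_classes)

    def forward(self, x):                                  # x: (1, 1, H, W)
        saliency = self.global_net(x)                      # (1, 1, H/4, W/4)
        top = saliency.flatten().topk(self.k).indices      # most informative cells
        w = saliency.shape[-1]
        local_embs = []
        for idx in top.tolist():                           # crop around each peak
            y0 = max(0, min(4 * (idx // w), x.shape[-2] - self.crop))
            x0 = max(0, min(4 * (idx % w), x.shape[-1] - self.crop))
            patch = x[..., y0:y0 + self.crop, x0:x0 + self.crop]
            local_embs.append(self.local_net(patch))
        local_emb = torch.stack(local_embs).mean(dim=0)       # (1, 32)
        global_emb = saliency.mean(dim=(2, 3)).repeat(1, 16)  # crude global summary
        return self.fuse(torch.cat([global_emb, local_emb], dim=1))

logits = GlobalLocalFusion()(torch.rand(1, 1, 512, 512))   # placeholder image
```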
arXiv Detail & Related papers (2020-02-13T15:28:42Z)