Detection of prostate cancer in whole-slide images through end-to-end
training with image-level labels
- URL: http://arxiv.org/abs/2006.03394v1
- Date: Fri, 5 Jun 2020 12:11:35 GMT
- Authors: Hans Pinckaers, Wouter Bulten, Jeroen van der Laak, Geert Litjens
- Abstract summary: We propose to use a streaming implementation of convolutional layers, to train a modern CNN (ResNet-34) with 21 million parameters end-to-end on 4712 prostate biopsies.
The method enables the use of entire biopsy images at high-resolution directly by reducing the GPU memory requirements by 2.4 TB.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prostate cancer is the most prevalent cancer among men in Western countries,
with 1.1 million new diagnoses every year. The gold standard for the diagnosis
of prostate cancer is a pathologist's evaluation of prostate tissue.
To potentially assist pathologists, deep-learning-based cancer detection
systems have been developed. Many of the state-of-the-art models are
patch-based convolutional neural networks, as the use of entire scanned slides
is hampered by memory limitations on accelerator cards. Patch-based systems
typically require detailed, pixel-level annotations for effective training.
However, such annotations are seldom readily available, in contrast to the
clinical reports of pathologists, which contain slide-level labels. As such,
developing algorithms which do not require manual pixel-wise annotations, but
can learn using only the clinical report would be a significant advancement for
the field.
In this paper, we propose to use a streaming implementation of convolutional
layers, to train a modern CNN (ResNet-34) with 21 million parameters end-to-end
on 4712 prostate biopsies. The method enables the use of entire biopsy images
at high-resolution directly by reducing the GPU memory requirements by 2.4 TB.
We show that modern CNNs, trained using our streaming approach, can extract
meaningful features from high-resolution images without additional heuristics,
reaching similar performance as state-of-the-art patch-based and
multiple-instance learning methods. By circumventing the need for manual
annotations, this approach can function as a blueprint for other tasks in
histopathological diagnosis.
The source code to reproduce the streaming models is available at
https://github.com/DIAGNijmegen/pathology-streaming-pipeline .
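The core streaming idea, processing an image in overlapping tiles so that the full-resolution input never has to reside in accelerator memory at once, can be illustrated with a toy sketch. The code below is a plain-Python illustration under simplifying assumptions (a single "valid" 2D cross-correlation computed in horizontal strips, no gradients or GPU memory management); it is not the authors' implementation, and the function names are invented for this example. See the repository above for the real pipeline.

```python
# Toy sketch of tile-based ("streaming") computation: a valid 2D
# cross-correlation over the full image equals the same operation
# computed strip by strip, provided each strip carries a halo of
# k-1 extra input rows. Illustration only, not the paper's code.

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D cross-correlation on nested lists of floats."""
    h, w = len(img), len(img[0])
    k = len(kernel)
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            s = 0.0
            for di in range(k):
                for dj in range(k):
                    s += img[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def conv2d_streamed(img, kernel, tile_rows):
    """Compute the same result in horizontal strips of `tile_rows`
    output rows, holding only one strip plus its halo at a time."""
    h = len(img)
    k = len(kernel)
    out = []
    i = 0
    while i < h - k + 1:
        rows = min(tile_rows, h - k + 1 - i)
        strip = img[i:i + rows + k - 1]          # strip + (k-1)-row halo
        out.extend(conv2d_valid(strip, kernel))  # same op on the strip
        i += rows
    return out

if __name__ == "__main__":
    img = [[float((r * 7 + c * 3) % 11) for c in range(12)] for r in range(10)]
    kern = [[1.0, 0.0, -1.0], [2.0, 0.0, -2.0], [1.0, 0.0, -1.0]]
    assert conv2d_streamed(img, kern, tile_rows=4) == conv2d_valid(img, kern)
```

The same principle, applied to the forward and backward passes of every convolutional layer, is what lets the paper's method trade peak GPU memory for extra compute on gigapixel biopsy images.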
Related papers
- Towards a Comprehensive Benchmark for Pathological Lymph Node Metastasis in Breast Cancer Sections [21.75452517154339]
We reprocessed 1,399 whole slide images (WSIs) and labels from the Camelyon-16 and Camelyon-17 datasets.
Based on the sizes of re-annotated tumor regions, we upgraded the binary cancer screening task to a four-class task.
arXiv Detail & Related papers (2024-11-16T09:19:24Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z)
- Deep Interactive Learning-based ovarian cancer segmentation of H&E-stained whole slide images to study morphological patterns of BRCA mutation [1.763687468970535]
We propose Deep Interactive Learning with a pretrained segmentation model from a different cancer type to reduce manual annotation time.
With 3.5 hours of manual annotation, we trained an accurate ovarian cancer segmentation model from a pretrained breast segmentation model, achieving an intersection-over-union of 0.74, recall of 0.86, and precision of 0.84.
arXiv Detail & Related papers (2022-03-28T18:21:17Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Bridging the gap between prostate radiology and pathology through machine learning [2.090877308669147]
We compare different labeling strategies, namely, pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels.
We analyse the effects these labels have on the performance of the trained machine learning models.
arXiv Detail & Related papers (2021-12-03T21:38:20Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net [60.145440290349796]
The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsies by pathologists.
Computer-aided diagnosis systems allow the cancerous patterns in the tissue to be delineated and classified.
The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, able to segment cancerous tissue.
arXiv Detail & Related papers (2020-05-22T19:49:10Z)
- Representation Learning of Histopathology Images using Graph Neural Networks [12.427740549056288]
We propose a two-stage framework for WSI representation learning.
We sample relevant patches using a color-based method and use graph neural networks to learn relations among the sampled patches, aggregating the image information into a single vector representation.
We demonstrate the performance of our approach for discriminating two sub-types of lung cancer, Lung Adenocarcinoma (LUAD) and Lung Squamous Cell Carcinoma (LUSC).
arXiv Detail & Related papers (2020-04-16T00:09:20Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.