Deep Interactive Learning-based ovarian cancer segmentation of
H&E-stained whole slide images to study morphological patterns of BRCA
mutation
- URL: http://arxiv.org/abs/2203.15015v1
- Date: Mon, 28 Mar 2022 18:21:17 GMT
- Title: Deep Interactive Learning-based ovarian cancer segmentation of
H&E-stained whole slide images to study morphological patterns of BRCA
mutation
- Authors: David Joon Ho, M. Herman Chui, Chad M. Vanderbilt, Jiwon Jung, Mark E.
Robson, Chan-Sik Park, Jin Roh, Thomas J. Fuchs
- Abstract summary: We propose Deep Interactive Learning with a pretrained segmentation model from a different cancer type to reduce manual annotation time.
Starting from a pretrained breast cancer segmentation model, we trained an accurate ovarian cancer segmentation model with only 3.5 hours of manual annotation, achieving an intersection-over-union of 0.74, a recall of 0.86, and a precision of 0.84.
- Score: 1.763687468970535
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has been widely used to analyze digitized hematoxylin and eosin
(H&E)-stained histopathology whole slide images. Automated cancer segmentation
using deep learning can be used to diagnose malignancy and to find novel
morphological patterns to predict molecular subtypes. To train pixel-wise
cancer segmentation models, manual annotation from pathologists is generally a
bottleneck due to its time-consuming nature. In this paper, we propose Deep
Interactive Learning with a pretrained segmentation model from a different
cancer type to reduce manual annotation time. Instead of annotating all pixels
from cancer and non-cancer regions on giga-pixel whole slide images, an
iterative process of annotating only the regions mislabeled by the current
segmentation model and training/fine-tuning the model with these additional
annotations reduces the annotation time. In particular, starting from a
pretrained segmentation model reduces the time further compared to annotating
from scratch. With 3.5 hours of manual annotation, we trained an accurate
ovarian cancer segmentation model from a pretrained breast cancer segmentation
model, achieving an intersection-over-union of 0.74, a recall of 0.86, and a
precision of 0.84. With automatically extracted high-grade
serous ovarian cancer patches, we attempted to train another deep learning
model to predict BRCA mutation. The segmentation model and code have been
released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
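The following is a minimal sketch of the iterative annotate-and-fine-tune loop the abstract describes, assuming a generic PyTorch segmentation network and random tensors standing in for pathologist corrections; it is illustrative only and is not the released DMMN-ovary implementation.
```python
# Sketch of the Deep Interactive Learning (DIaL) loop described above.
# Assumptions (not from the released code): a toy PyTorch segmentation
# network and random tensors in place of pathologist-corrected patches.
import torch
import torch.nn as nn

# Toy stand-in for a pixel-wise, multi-class segmentation network.
# A real setup would load the pretrained breast model weights here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),            # 3 output classes, pixel-wise logits
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

annotated_patches = []  # accumulates (image, mask) pairs across rounds

for iteration in range(3):  # a few correction rounds
    # 1) Run the current model over the slides; in practice a pathologist
    #    then annotates only the mislabeled regions. Random data plays
    #    that role here.
    corrections = [(torch.rand(1, 3, 64, 64), torch.randint(0, 3, (1, 64, 64)))
                   for _ in range(4)]
    annotated_patches.extend(corrections)

    # 2) Fine-tune on everything annotated so far (old + new corrections).
    model.train()
    for _ in range(5):  # a handful of epochs per round
        for image, mask in annotated_patches:
            optimizer.zero_grad()
            loss = criterion(model(image), mask)
            loss.backward()
            optimizer.step()
    print(f"round {iteration}: {len(annotated_patches)} annotated patches")
```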
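For reference, the reported scores are standard pixel-wise segmentation metrics. A small self-contained example of how IoU, recall, and precision are computed from a predicted and an annotated binary mask (toy arrays, not the paper's data):
```python
# Pixel-wise IoU, recall, and precision for a binary cancer mask.
import numpy as np

pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]], dtype=bool)   # predicted cancer pixels
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0]], dtype=bool)  # annotated cancer pixels

tp = np.logical_and(pred, truth).sum()        # true positives
fp = np.logical_and(pred, ~truth).sum()       # false positives
fn = np.logical_and(~pred, truth).sum()       # false negatives

iou = tp / (tp + fp + fn)       # intersection-over-union
recall = tp / (tp + fn)         # fraction of cancer pixels recovered
precision = tp / (tp + fp)      # fraction of predictions that are cancer
print(f"IoU={iou:.2f} recall={recall:.2f} precision={precision:.2f}")
```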
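The downstream BRCA experiment relies on automatically extracted tumor patches. A hedged sketch of that extraction step, using the openslide-python API over a predicted binary mask; the file names, tile size, and 0.5 tumor-fraction threshold are illustrative assumptions, not values from the paper.
```python
# Sketch: extract tumor patches from a whole slide image using a predicted
# segmentation mask, as input for a downstream BRCA-mutation classifier.
import numpy as np
import openslide  # pip install openslide-python (needs the OpenSlide library)

TILE = 512                                   # patch size in level-0 pixels
slide = openslide.OpenSlide("slide.svs")     # hypothetical WSI path
mask = np.load("cancer_mask.npy")            # binary mask at level-0 resolution
width, height = slide.dimensions

patches = []
for y in range(0, height - TILE + 1, TILE):
    for x in range(0, width - TILE + 1, TILE):
        tumor_fraction = mask[y:y + TILE, x:x + TILE].mean()
        if tumor_fraction > 0.5:             # keep tiles that are mostly tumor
            patch = slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB")
            patches.append(patch)            # fed to the BRCA classifier later
print(f"extracted {len(patches)} tumor patches")
```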
Related papers
- Histopathological Image Classification with Cell Morphology Aware Deep Neural Networks [11.749248917866915]
We propose a novel DeepCMorph model pre-trained to learn cell morphology and identify a large number of different cancer types.
We pretrained this module on the Pan-Cancer TCGA dataset consisting of over 270K tissue patches extracted from 8736 diagnostic slides from 7175 patients.
The proposed solution achieved a new state-of-the-art performance on the dataset under consideration, detecting 32 cancer types with over 82% accuracy and outperforming all previously proposed solutions by more than 4%.
arXiv Detail & Related papers (2024-07-11T16:03:59Z) - Shape Matters: Detecting Vertebral Fractures Using Differentiable
Point-Based Shape Decoding [51.38395069380457]
Degenerative spinal pathologies are highly prevalent among the elderly population.
Timely diagnosis of osteoporotic fractures and other degenerative deformities facilitates proactive measures to mitigate the risk of severe back pain and disability.
In this study, we specifically explore the use of shape auto-encoders for vertebrae.
arXiv Detail & Related papers (2023-12-08T18:11:22Z) - Stain-invariant self supervised learning for histopathology image
analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% and even matches the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - Feature-enhanced Adversarial Semi-supervised Semantic Segmentation
Network for Pulmonary Embolism Annotation [6.142272540492936]
This study established a feature-enhanced adversarial semi-supervised semantic segmentation model to automatically annotate pulmonary embolism lesion areas.
In existing studies, PEA image segmentation methods are trained with supervised learning.
This study proposes a semi-supervised learning method that makes the model applicable to different datasets by adding a small number of unlabeled images.
arXiv Detail & Related papers (2022-04-08T04:21:02Z) - Automatic size and pose homogenization with spatial transformer network
to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z) - Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Metastatic Cancer Image Classification Based On Deep Learning Method [7.832709940526033]
We propose a novel method that combines deep learning-based image classification with the DenseNet169 framework and the Rectified Adam optimization algorithm.
Our model achieves superior performance over classical convolutional neural network approaches such as VGG19, ResNet34, and ResNet50.
arXiv Detail & Related papers (2020-11-13T16:04:39Z) - Detection of prostate cancer in whole-slide images through end-to-end
training with image-level labels [8.851215922158753]
We propose to use a streaming implementation of convolutional layers to train a modern CNN (ResNet-34) with 21 million parameters end-to-end on 4712 prostate biopsies.
The method enables the direct use of entire biopsy images at high resolution by reducing the GPU memory requirements by 2.4 TB.
arXiv Detail & Related papers (2020-06-05T12:11:35Z) - An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.