Whole Slide Images are 2D Point Clouds: Context-Aware Survival
Prediction using Patch-based Graph Convolutional Networks
- URL: http://arxiv.org/abs/2107.13048v1
- Date: Tue, 27 Jul 2021 19:17:37 GMT
- Title: Whole Slide Images are 2D Point Clouds: Context-Aware Survival
Prediction using Patch-based Graph Convolutional Networks
- Authors: Richard J. Chen, Ming Y. Lu, Muhammad Shaban, Chengkuan Chen, Tiffany
Y. Chen, Drew F. K. Williamson, Faisal Mahmood
- Abstract summary: We present Patch-GCN, a context-aware, spatially-resolved patch-based graph convolutional network that hierarchically aggregates instance-level histology features.
We demonstrate that Patch-GCN outperforms all prior weakly-supervised approaches by 3.58-9.46%.
- Score: 6.427108174481534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cancer prognostication is a challenging task in computational pathology that
requires context-aware representations of histology features to adequately
infer patient survival. Despite the advancements made in weakly-supervised deep
learning, many approaches are not context-aware and are unable to model
important morphological feature interactions between cell identities and tissue
types that are prognostic for patient survival. In this work, we present
Patch-GCN, a context-aware, spatially-resolved patch-based graph convolutional
network that hierarchically aggregates instance-level histology features to
model local- and global-level topological structures in the tumor
microenvironment. We validate Patch-GCN with 4,370 gigapixel WSIs across five
different cancer types from the Cancer Genome Atlas (TCGA), and demonstrate
that Patch-GCN outperforms all prior weakly-supervised approaches by
3.58-9.46%. Our code and corresponding models are publicly available at
https://github.com/mahmoodlab/Patch-GCN.
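To make the idea above concrete, here is a minimal, illustrative sketch of a Patch-GCN-style pipeline: pre-extracted patch features and their (x, y) centroids are linked into a k-nearest-neighbour graph, a few graph-convolution layers aggregate local context, and an attention-pooled slide embedding is mapped to a survival risk score. This is not the authors' released implementation (see the repository above); the layer widths, the value of k, and the pooling scheme are illustrative assumptions.

```python
# Illustrative sketch only; not the released Patch-GCN code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_adjacency(coords: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Row-normalized k-NN adjacency (with self-loops) built from patch centroids."""
    n = coords.size(0)
    dist = torch.cdist(coords, coords)                 # (n, n) pairwise distances
    knn = dist.topk(k + 1, largest=False).indices      # self + k nearest neighbours
    adj = torch.zeros(n, n, device=coords.device)
    adj.scatter_(1, knn, 1.0)
    adj = ((adj + adj.t()) > 0).float()                # symmetrize the graph
    return adj / adj.sum(dim=1, keepdim=True)          # row-normalize for mean aggregation


class PatchGCNSketch(nn.Module):
    """Aggregates patch features over a spatial graph and predicts a slide-level risk score."""

    def __init__(self, in_dim: int = 1024, hid_dim: int = 256, n_layers: int = 3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.gcn_layers = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(n_layers)])
        self.attn = nn.Linear(hid_dim, 1)               # attention scores for pooling
        self.risk_head = nn.Linear(hid_dim, 1)

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        adj = knn_adjacency(coords)                     # (n, n) spatial patch graph
        h = F.relu(self.proj(feats))                    # (n, hid_dim) projected patch features
        for layer in self.gcn_layers:
            h = F.relu(layer(adj @ h)) + h              # graph convolution + residual connection
        w = torch.softmax(self.attn(h), dim=0)          # (n, 1) attention over patches
        slide = (w * h).sum(dim=0)                      # slide-level embedding
        return self.risk_head(slide)                    # scalar survival risk score


# Usage on one slide: 2,000 patches with 1024-d features and (x, y) centroids.
feats, coords = torch.randn(2000, 1024), torch.rand(2000, 2)
risk = PatchGCNSketch()(feats, coords)
```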
Related papers
- From Pixels to Histopathology: A Graph-Based Framework for Interpretable Whole Slide Image Analysis [81.19923502845441]
We develop a graph-based framework that constructs WSI graph representations.
We build tissue representations (nodes) that follow biological boundaries rather than arbitrary patches.
In our method's final step, we solve the diagnostic task through a graph attention network.
arXiv Detail & Related papers (2025-03-14T20:15:04Z)
- A Knowledge-enhanced Pathology Vision-language Foundation Model for Cancer Diagnosis [58.85247337449624]
We propose a knowledge-enhanced vision-language pre-training approach that integrates disease knowledge into the alignment within hierarchical semantic groups.
KEEP achieves state-of-the-art performance in zero-shot cancer diagnostic tasks.
arXiv Detail & Related papers (2024-12-17T17:45:21Z)
- Towards a Comprehensive Benchmark for Pathological Lymph Node Metastasis in Breast Cancer Sections [21.75452517154339]
We reprocessed 1,399 whole slide images (WSIs) and labels from the Camelyon-16 and Camelyon-17 datasets.
Based on the sizes of re-annotated tumor regions, we upgraded the binary cancer screening task to a four-class task.
arXiv Detail & Related papers (2024-11-16T09:19:24Z)
- Genomics-guided Representation Learning for Pathologic Pan-cancer Tumor Microenvironment Subtype Prediction [7.502459517962686]
We propose PathoTME, a genomics-guided representation learning framework that employs whole slide images (WSIs) for pan-cancer TME subtype prediction.
Our model achieves better performance than other state-of-the-art methods across 23 cancer types on the TCGA dataset.
arXiv Detail & Related papers (2024-06-10T17:56:21Z)
- Pathology-genomic fusion via biologically informed cross-modality graph learning for survival analysis [7.996257103473235]
We propose the Pathology-Genome Heterogeneous Graph (PGHG), which integrates whole slide images (WSIs) and bulk RNA-Seq expression data with a heterogeneous graph neural network for cancer survival analysis.
The PGHG consists of a biological knowledge-guided representation learning network and a pathology-genome heterogeneous graph.
We evaluate the model on low-grade gliomas, glioblastoma, and kidney renal papillary cell carcinoma datasets from the Cancer Genome Atlas.
arXiv Detail & Related papers (2024-04-11T09:07:40Z)
- Context-Aware Self-Supervised Learning of Whole Slide Images [0.0]
A novel two-stage learning technique is presented in this work.
A graph representation capturing all dependencies among regions of the WSI is an intuitive way to model this context.
The entire slide is presented as a graph, where the nodes correspond to the patches from the WSI.
The proposed framework is then tested using WSIs from prostate and kidney cancers.
arXiv Detail & Related papers (2023-06-07T20:23:05Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning a good representation of giga-pixel whole slide pathology images (WSIs) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Domain Invariant Model with Graph Convolutional Network for Mammogram Classification [49.691629817104925]
We propose a novel framework, namely the Domain Invariant Model with Graph Convolutional Network (DIM-GCN).
We first propose a Bayesian network that explicitly decomposes the latent variables into disease-related and disease-irrelevant parts, which are provably disentangled from each other.
To better capture the macroscopic features, we leverage the observed clinical attributes as a reconstruction target via a Graph Convolutional Network (GCN).
arXiv Detail & Related papers (2022-04-21T08:23:44Z)
- Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained with hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, the Tanimoto coefficient, and the area under the curve (AUC) of the receiver operating characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and classified, yielding a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net [60.145440290349796]
The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsy by pathologists.
Computer-aided diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue.
The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, that is able to segment cancerous tissue.
arXiv Detail & Related papers (2020-05-22T19:49:10Z)
- Representation Learning of Histopathology Images using Graph Neural Networks [12.427740549056288]
We propose a two-stage framework for WSI representation learning.
We sample relevant patches using a color-based method and use graph neural networks to learn relations among sampled patches to aggregate the image information into a single vector representation.
We demonstrate the performance of our approach for discriminating two sub-types of lung cancer, Lung Adenocarcinoma (LUAD) and Lung Squamous Cell Carcinoma (LUSC).
arXiv Detail & Related papers (2020-04-16T00:09:20Z)
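The two-stage approach in the last entry above starts from a simple color-based tissue filter before any graph is built over the sampled patches. Below is a rough sketch of such a filter, assuming only a low-resolution RGB thumbnail of the slide; the saturation threshold and patch size are illustrative assumptions, not values taken from that paper.

```python
# Rough, illustrative sketch of color-based tissue patch sampling on a WSI thumbnail.
import numpy as np


def sample_tissue_patches(thumbnail: np.ndarray, patch: int = 32, sat_thresh: float = 0.1):
    """Return (row, col) top-left corners of patches whose mean saturation
    suggests tissue rather than glass background.

    thumbnail: HxWx3 uint8 RGB array (a low-resolution rendering of the WSI).
    """
    rgb = thumbnail.astype(np.float32) / 255.0
    cmax, cmin = rgb.max(axis=2), rgb.min(axis=2)
    saturation = (cmax - cmin) / (cmax + 1e-8)      # HSV-style saturation per pixel

    keep = []
    h, w = saturation.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            if saturation[r:r + patch, c:c + patch].mean() > sat_thresh:
                keep.append((r, c))                 # colourful region: likely tissue
    return keep


# Usage: keep only colourful (tissue) regions of a 2048x2048 thumbnail.
thumb = np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)
coords = sample_tissue_patches(thumb)
```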
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.