A graph-transformer for whole slide image classification
- URL: http://arxiv.org/abs/2205.09671v1
- Date: Thu, 19 May 2022 16:32:10 GMT
- Title: A graph-transformer for whole slide image classification
- Authors: Yi Zheng, Rushin H. Gindra, Emily J. Green, Eric J. Burks, Margrit
Betke, Jennifer E. Beane, Vijaya B. Kolachalama
- Abstract summary: We present a Graph-Transformer (GT) framework, called GTP, that fuses a graph-based representation of a whole slide image (WSI) with a vision transformer for processing pathology images to predict disease grade.
Our findings demonstrate GTP as an interpretable and effective deep learning framework for WSI-level classification.
- Score: 11.968797693846476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning is a powerful tool for whole slide image (WSI) analysis.
Typically, when performing supervised deep learning, a WSI is divided into
small patches, a model is trained on those patches, and the patch-level
outcomes are aggregated to estimate disease grade. However, patch-based
methods introduce label noise during training by assuming that each patch is
independent and carries the same label as the WSI, and they neglect WSI-level
information that is significant in disease grading.
Here we present a Graph-Transformer (GT) framework, called GTP, that fuses a
graph-based representation of a WSI with a vision transformer for processing
pathology images to predict disease grade. We selected $4,818$ WSIs from the
Clinical Proteomic Tumor Analysis Consortium (CPTAC), the National Lung
Screening Trial (NLST), and The Cancer Genome Atlas (TCGA), and used GTP to
distinguish adenocarcinoma (LUAD) and squamous cell carcinoma (LSCC) from
adjacent non-cancerous tissue (normal). First, using NLST data, we developed a
contrastive learning framework to generate a feature extractor. This allowed us
to compute feature vectors of individual WSI patches, which were used to
represent the nodes of the graph followed by construction of the GTP framework.
Our model trained on the CPTAC data achieved consistently high performance on
three-label classification (normal versus LUAD versus LSCC: mean accuracy
$= 91.2 \pm 2.5\%$) based on five-fold cross-validation, and mean accuracy
$= 82.3 \pm 1.0\%$ on external test data (TCGA). We also introduced a
graph-based saliency mapping technique, called GraphCAM, that can identify
regions that are highly associated with the class label. Our findings
demonstrate GTP as an interpretable and effective deep learning framework for
WSI-level classification.
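To make the pipeline concrete, here is a minimal numpy sketch of the GTP idea: patch feature vectors (stand-ins for the contrastive-learning features) become graph nodes, spatial adjacency links neighboring patches, one graph-convolution step mixes neighborhood context, and transformer-style attention pooling yields a slide-level vector. This is an illustrative sketch, not the authors' released code; all names, weights, and dimensions are assumptions.

```python
# Hedged sketch of a graph-transformer over WSI patches (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def build_patch_graph(coords, radius=1.5):
    """Adjacency (with self-loops) linking patches whose grid
    coordinates lie within `radius` of each other."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (d <= radius).astype(float)

def gcn_layer(adj, feats, weight):
    """One symmetric-normalized graph-convolution step:
    ReLU(D^-1/2 A D^-1/2 X W)."""
    deg = adj.sum(1)
    norm = adj / np.sqrt(deg[:, None] * deg[None, :])
    return np.maximum(norm @ feats @ weight, 0.0)

def attention_pool(feats, query):
    """Transformer-style pooling: softmax(X q) weighted sum over nodes."""
    scores = feats @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ feats, w

# A 3x3 grid of patches with 8-dim features (stand-ins for real extractors).
coords = np.array([[i, j] for i in range(3) for j in range(3)], float)
feats = rng.normal(size=(9, 8))
adj = build_patch_graph(coords)

hidden = gcn_layer(adj, feats, rng.normal(size=(8, 8)))
slide_vec, attn = attention_pool(hidden, rng.normal(size=8))
print(slide_vec.shape)  # (8,)
```

The attention weights over nodes are what a GraphCAM-style saliency map would visualize: each patch's contribution to the slide-level decision.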
Related papers
- C2P-GCN: Cell-to-Patch Graph Convolutional Network for Colorectal Cancer Grading [2.570529808612886]
Graph-based learning approaches are increasingly favored for grading colorectal cancer histology images.
Recent graph-based techniques divide whole slide images into small or medium-sized patches and then build a graph on each patch for direct use in training.
This method fails to capture the tissue structure information present in an entire WSI and relies on training from a significantly large dataset of image patches.
We propose a novel cell-to-patch graph convolutional network (C2P-GCN), which is a two-stage graph formation-based approach.
arXiv Detail & Related papers (2024-03-08T00:15:43Z)
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- Explainable and Position-Aware Learning in Digital Pathology [0.0]
In this work, classification of cancer from WSIs is performed with positional embedding and graph attention.
A comparison of the proposed method with leading approaches in cancer diagnosis and grading verifies improved performance.
The identification of cancerous regions in WSIs is another critical task in cancer diagnosis.
arXiv Detail & Related papers (2023-06-14T01:53:17Z)
- Context-Aware Self-Supervised Learning of Whole Slide Images [0.0]
A novel two-stage learning technique is presented in this work.
A graph representation capturing all dependencies among regions in the WSI is very intuitive.
The entire slide is presented as a graph, where the nodes correspond to the patches from the WSI.
The proposed framework is then tested using WSIs from prostate and kidney cancers.
arXiv Detail & Related papers (2023-06-07T20:23:05Z)
- LESS: Label-efficient Multi-scale Learning for Cytological Whole Slide Image Screening [19.803614403803962]
We propose a weakly-supervised Label-Efficient WSI Screening method, dubbed LESS, for cytological WSI analysis with only slide-level labels.
We provide appropriate supervision by using slide-level labels to improve the learning of patch-level features.
It outperforms state-of-the-art MIL methods on pathology WSIs and realizes automatic cytological WSI cancer screening.
arXiv Detail & Related papers (2023-06-06T05:09:20Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention guiding loss, this leads to an accuracy boost of the trained models with few regions annotated for each class.
It may in the future serve as an important contribution to train MIL models in the clinically relevant context of cancer classification in histopathology.
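The combination described above can be sketched in a few lines: attention weights pool a bag of patch features into a slide-level probability, and the slides whose predictions sit closest to the decision boundary are flagged for expert annotation. This is a generic illustration of attention-based MIL with uncertainty sampling, not the paper's implementation; all names and dimensions are assumptions.

```python
# Hedged sketch: attention-MIL scoring plus uncertainty-based selection.
import numpy as np

rng = np.random.default_rng(1)

def mil_predict(bag, v, w):
    """Attention pooling over a bag of patch features, then a
    sigmoid slide-level probability."""
    a = np.tanh(bag @ v)              # per-patch attention logits
    a = np.exp(a - a.max())
    a /= a.sum()                      # softmax attention weights
    pooled = a @ bag                  # weighted bag representation
    return 1.0 / (1.0 + np.exp(-(pooled @ w)))

v, w = rng.normal(size=8), rng.normal(size=8)
bags = [rng.normal(size=(n, 8)) for n in (5, 12, 7)]  # 3 toy WSIs
probs = np.array([mil_predict(b, v, w) for b in bags])

# Uncertainty = closeness to the 0.5 decision boundary; the most
# uncertain slide is queried for annotation.
uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)
query_idx = int(np.argmax(uncertainty))
```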
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Weakly Supervised Joint Whole-Slide Segmentation and Classification in Prostate Cancer [8.790852468118208]
WholeSIGHT is a weakly-supervised method to segment and classify Whole-Slide images.
We evaluated WholeSIGHT on three public prostate cancer WSI datasets.
arXiv Detail & Related papers (2023-01-07T20:38:36Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and area under the curve (AUC) of the receiver operating characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z)
- DSNet: A Dual-Stream Framework for Weakly-Supervised Gigapixel Pathology Image Analysis [78.78181964748144]
We present a novel weakly-supervised framework for classifying whole slide images (WSIs).
WSIs are commonly processed by patch-wise classification with patch-level labels.
With image-level labels only, patch-wise classification would be sub-optimal due to inconsistency between the patch appearance and image-level label.
arXiv Detail & Related papers (2021-09-13T09:10:43Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
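For readers unfamiliar with LPA, here is a textbook sketch of plain label propagation on a small graph: labels diffuse along edges by iterated neighborhood averaging, with the known labels clamped each step. This is the classical algorithm the entry above unifies with GCNs, not that paper's end-to-end model; the graph and labels are toy examples.

```python
# Hedged sketch of classical label propagation (LPA) on a 5-node path graph.
import numpy as np

adj = np.array([        # path graph 0-1-2-3-4
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], float)
labels = np.array([[1, 0], [0, 0], [0, 0], [0, 0], [0, 1]], float)
mask = np.array([True, False, False, False, True])  # only nodes 0 and 4 labeled

# Iterate Y <- D^-1 A Y, clamping the labeled nodes each step.
deg_inv = 1.0 / adj.sum(1)
Y = labels.copy()
for _ in range(50):
    Y = deg_inv[:, None] * (adj @ Y)
    Y[mask] = labels[mask]

pred = Y.argmax(1)  # nodes near 0 take class 0; nodes near 4 take class 1
```

On this path graph the propagated scores approach the linear interpolation between the two clamped endpoints, so node 1 follows class 0 and node 3 follows class 1, while node 2 sits exactly on the boundary.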
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.