From Pixels to Histopathology: A Graph-Based Framework for Interpretable Whole Slide Image Analysis
- URL: http://arxiv.org/abs/2503.11846v1
- Date: Fri, 14 Mar 2025 20:15:04 GMT
- Title: From Pixels to Histopathology: A Graph-Based Framework for Interpretable Whole Slide Image Analysis
- Authors: Alexander Weers, Alexander H. Berger, Laurin Lux, Peter Schüffler, Daniel Rueckert, Johannes C. Paetzold
- Abstract summary: We develop a graph-based framework that constructs WSI graph representations. We build tissue representations (nodes) that follow biological boundaries rather than arbitrary patches. In our method's final step, we solve the diagnostic task through a graph attention network.
- Score: 81.19923502845441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The histopathological classification of whole-slide images (WSIs) is a fundamental task in digital pathology, yet it requires extensive time and expertise from specialists. While deep learning methods show promising results, they typically process WSIs by dividing them into artificial patches, which inherently prevents a network from learning from the entire image context, disregards natural tissue structures, and compromises interpretability. Our method overcomes this limitation through a novel graph-based framework that constructs WSI graph representations. The WSI graph efficiently captures essential histopathological information in a compact form. We build tissue representations (nodes) that follow biological boundaries rather than arbitrary patches, all while providing interpretable features for explainability. Through adaptive graph coarsening guided by learned embeddings, we progressively merge regions while maintaining discriminative local features and enabling efficient global information exchange. In our method's final step, we solve the diagnostic task through a graph attention network. We empirically demonstrate strong performance on multiple challenging tasks such as cancer stage classification and survival prediction, while also identifying predictive factors using Integrated Gradients. Our implementation is publicly available at https://github.com/HistoGraph31/pix2pathology
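For illustration, the slide-level classification step described in the abstract can be approximated with a minimal PyTorch Geometric sketch: tissue-region nodes carry precomputed feature vectors, adjacent regions are connected by edges, and a graph attention network pools node embeddings into a slide-level prediction. The class name `WSIGraphGAT`, the feature dimension, and the random adjacency below are assumptions made for this sketch, not the authors' implementation; the actual graph construction, interpretable node features, and coarsening schedule are defined in the paper and the linked repository.

```python
# Minimal sketch (not the authors' code): a region-level WSI graph classified
# with a graph attention network using PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool

class WSIGraphGAT(torch.nn.Module):
    """Two GAT layers over tissue-region nodes, then slide-level pooling."""
    def __init__(self, in_dim, hidden_dim=64, num_classes=2, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads)          # concat -> hidden_dim * heads
        self.gat2 = GATConv(hidden_dim * heads, hidden_dim, heads=1)
        self.classifier = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.gat1(x, edge_index))
        x = F.elu(self.gat2(x, edge_index))
        x = global_mean_pool(x, batch)        # one embedding per slide
        return self.classifier(x)

# Toy slide: 100 tissue-region nodes with 32-dim features and a placeholder
# adjacency; in the real pipeline, nodes and edges come from a segmentation
# that follows biological boundaries rather than a fixed patch grid.
num_nodes, feat_dim = 100, 32
x = torch.randn(num_nodes, feat_dim)
edge_index = torch.randint(0, num_nodes, (2, 400))    # random edges, sketch only
batch = torch.zeros(num_nodes, dtype=torch.long)      # all nodes belong to slide 0
model = WSIGraphGAT(in_dim=feat_dim)
logits = model(x, edge_index, batch)                  # shape: [1, num_classes]
```

Node-level attributions in the spirit of the Integrated Gradients analysis mentioned above could then be obtained, for example, by applying Captum's `IntegratedGradients` to the node features, though the exact attribution setup used in the paper may differ.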
Related papers
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z) - Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representation on the whole graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability of whether a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z) - Explainable and Position-Aware Learning in Digital Pathology [0.0]
In this work, classification of cancer from WSIs is performed with positional embedding and graph attention.
A comparison of the proposed method with leading approaches in cancer diagnosis and grading verifies improved performance.
The identification of cancerous regions in WSIs is another critical task in cancer diagnosis.
arXiv Detail & Related papers (2023-06-14T01:53:17Z) - Context-Aware Self-Supervised Learning of Whole Slide Images [0.0]
A novel two-stage learning technique is presented in this work.
A graph representation capturing all dependencies among regions in the WSI is very intuitive.
The entire slide is presented as a graph, where the nodes correspond to the patches from the WSI.
The proposed framework is then tested using WSIs from prostate and kidney cancers.
arXiv Detail & Related papers (2023-06-07T20:23:05Z) - HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate whether features learned using more complex networks lead to a gain in performance.
arXiv Detail & Related papers (2021-06-13T18:55:23Z) - Learning Whole-Slide Segmentation from Inexact and Incomplete Labels using Tissue Graphs [11.315178576537768]
We propose SegGini, a weakly supervised semantic segmentation method using graphs.
SegGini segments arbitrary and large images, scaling from tissue microarrays (TMA) to whole slide images (WSI).
arXiv Detail & Related papers (2021-03-04T16:04:24Z) - Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing an attention mechanism to characterize the importance of spatially neighboring regions, the most relevant information can be adaptively incorporated into decisions.
Our MGCN-AGL encodes long-range dependencies among image regions based on the expressive representations produced at the local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z) - Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space and a graph neural layer for propagating the supervision signals of labeled images.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.