Cell Graph Transformer for Nuclei Classification
- URL: http://arxiv.org/abs/2402.12946v1
- Date: Tue, 20 Feb 2024 12:01:30 GMT
- Title: Cell Graph Transformer for Nuclei Classification
- Authors: Wei Lou, Guanbin Li, Xiang Wan, Haofeng Li
- Abstract summary: We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poorly initialized features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
- Score: 78.47566396839628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nuclei classification is a critical step in computer-aided diagnosis with
histopathology images. In the past, various methods have employed graph neural
networks (GNN) to analyze cell graphs that model inter-cell relationships by
considering nuclei as vertices. However, they are limited by the GNN mechanism
that only passes messages among local nodes via fixed edges. To address the
issue, we develop a cell graph transformer (CGT) that treats nodes and edges as
input tokens to enable learnable adjacency and information exchange among all
nodes. Nevertheless, training the transformer with a cell graph presents
another challenge. Poorly initialized features can lead to noisy self-attention
scores and inferior convergence, particularly when processing the cell graphs
with numerous connections. Thus, we further propose a novel topology-aware
pretraining method that leverages a graph convolutional network (GCN) to learn
a feature extractor. The pre-trained features may suppress unreasonable
correlations and hence ease the finetuning of CGT. Experimental results suggest
that the proposed cell graph transformer with topology-aware pretraining
significantly improves the nuclei classification results, and achieves the
state-of-the-art performance. Code and models are available at
https://github.com/lhaof/CGT
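The abstract's two core ideas — treating nodes and edges as one token sequence under full self-attention, and initializing node features with a GCN — can be sketched roughly as below. This is an illustrative NumPy toy, not the authors' implementation; all function names, dimensions, and the single-head, single-layer setup are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(A, X, W):
    # Topology-aware feature extraction (stand-in for the pretraining idea):
    # symmetrically normalized propagation D^{-1/2} (A + I) D^{-1/2} X W, then ReLU.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def cgt_attention(node_feats, edge_feats, Wq, Wk, Wv):
    # Treat node and edge features as one token sequence and apply full
    # self-attention, so information can flow between ALL tokens (learnable
    # adjacency) instead of only along fixed graph edges as in a GNN.
    tokens = np.concatenate([node_feats, edge_feats], axis=0)  # (N + E, d)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))           # (N + E, N + E)
    return scores @ V

# Toy cell graph: 3 nuclei (nodes) on a path, 2 edges, feature dim 4.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))                    # raw nucleus features
H = gcn_layer(A, X, rng.normal(size=(4, 4)))   # GCN-initialized node tokens
E = rng.normal(size=(2, 4))                    # edge tokens (one per edge)
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = cgt_attention(H, E, Wq, Wk, Wv)          # (5, 4): updated node + edge tokens
```

In this reading, the GCN pass plays the role of the topology-aware pretraining stage: it produces initial token features that already respect graph structure, so the subsequent dense attention starts from less noisy scores.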
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Learning to Approximate Adaptive Kernel Convolution on Graphs [4.434835769977399]
We propose a diffusion learning framework, where the range of feature aggregation is controlled by the scale of a diffusion kernel.
Our model is tested on various standard datasets for node-wise classification and achieves state-of-the-art performance.
It is also validated on real-world brain network data for graph classification, demonstrating its practicality for Alzheimer's disease classification.
arXiv Detail & Related papers (2024-01-22T10:57:11Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module [21.383018558790674]
The Differentiable Cell Complex Module (DCM) is a novel learnable function that computes cell probabilities in the complex to improve the downstream task.
We show how to integrate DCM with cell complex message-passing network layers and train it in an end-to-end fashion.
Our model is tested on several homophilic and heterophilic graph datasets and it is shown to outperform other state-of-the-art techniques.
arXiv Detail & Related papers (2023-05-25T15:33:19Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN learns to adapt to the optimal graph kernel in a unified manner.
Experiments on acknowledged benchmark datasets yield promising results that demonstrate the outstanding performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Self-supervised edge features for improved Graph Neural Network training [8.980876474818153]
We present a framework for creating new edge features, applicable to any domain, via a combination of self-supervised and unsupervised learning.
We validate our work on three biological datasets comprising single-cell RNA sequencing data of neurological disease, in vitro SARS-CoV-2 infection, and human COVID-19 patients.
arXiv Detail & Related papers (2020-06-23T20:18:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.