SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention
- URL: http://arxiv.org/abs/2403.15480v1
- Date: Thu, 21 Mar 2024 03:11:53 GMT
- Title: SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention
- Authors: Yundong Sun, Dongjie Zhu, Yansong Wang, Zhaoshuo Tian, Ning Cao, Gregory O'Hare
- Abstract summary: Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs).
We propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module.
SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets.
- Score: 1.4126245676224705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs) and enhance graph representation performance. Unfortunately, Graph Transformers are computationally expensive due to the quadratic complexity inherent in self-attention when applied over large-scale graphs, especially for node tasks. In contrast, spiking neural networks (SNNs), with event-driven and binary spikes properties, can perform energy-efficient computation. In this work, we propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module. The matrix multiplication is replaced by sparse addition and mask operations. The linear complexity enables all-pair node interactions on large-scale graphs with limited GPU memory. To our knowledge, our work is the first attempt to introduce SNNs into Graph Transformers. Furthermore, we design SpikeGraphormer, a Dual-branch architecture, combining a sparse GNN branch with our SGA-driven Graph Transformer branch, which can simultaneously perform all-pair node interactions and capture local neighborhoods. SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets and makes substantial improvements in training time, inference time, and GPU memory cost (10 ~ 20x lower than vanilla self-attention). It also performs well in cross-domain applications (image and text classification). We release our code at https://github.com/PHD-lanyu/SpikeGraphormer.
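Below is a minimal, single-time-step PyTorch sketch of the spiking-attention idea described in the abstract, for illustration only: Q, K, and V are binarized by a Heaviside spike function with a rectangular surrogate gradient, so the attention product reduces to additions/counting of spikes, and computing K^T V before multiplying by Q gives cost linear in the number of nodes. The names used here (SpikeFunction, SpikingGraphAttention, threshold) are assumptions of this sketch, not the released SGA implementation, which additionally unrolls over time steps and uses sparse addition and mask operations.

```python
import torch
import torch.nn as nn


class SpikeFunction(torch.autograd.Function):
    """Heaviside step with a rectangular surrogate gradient for backprop."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Surrogate gradient: only pass gradients near the firing threshold.
        return grad_output * (v.abs() < 0.5).float()


def spike(v):
    return SpikeFunction.apply(v)


class SpikingGraphAttention(nn.Module):
    """All-pair attention in which Q, K, V are binary spike tensors.

    Because every entry of Q, K, V is 0 or 1, Q @ (K^T @ V) only sums spikes
    (no softmax, no floating-point products of activations), and computing
    K^T @ V first makes the cost O(N * d^2) rather than O(N^2 * d) for N nodes.
    """

    def __init__(self, dim, threshold=0.5):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)
        self.threshold = threshold

    def forward(self, x):
        # x: (num_nodes, dim) node features.
        q = spike(self.q_proj(x) - self.threshold)  # binary, (N, d)
        k = spike(self.k_proj(x) - self.threshold)  # binary, (N, d)
        v = spike(self.v_proj(x) - self.threshold)  # binary, (N, d)
        kv = k.t() @ v          # (d, d) spike counts
        out = q @ kv            # (N, d) counts selected by q's spikes
        return self.out_proj(out)


# Usage on random features for a 1,000-node graph:
x = torch.randn(1000, 64)
layer = SpikingGraphAttention(64)
print(layer(x).shape)  # torch.Size([1000, 64])
```

In the paper's dual-branch design, a layer like this would run alongside a sparse GNN branch, with the two outputs fused so the model captures both all-pair interactions and local neighborhoods.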
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitively expensive on large graphs unless those graphs are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z) - Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability, where a network trained on one graph can be applied to a different graph without retraining while retaining its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
arXiv Detail & Related papers (2023-07-25T02:11:41Z) - Graph Mixup with Soft Alignments [49.61520432554505]
We study graph data augmentation by mixup, which has been used successfully on images.
We propose S-Mixup, a simple yet effective mixup method for graph classification by soft alignments.
arXiv Detail & Related papers (2023-06-11T22:04:28Z) - Diffusing Graph Attention [15.013509382069046]
We develop Graph Diffuser (GD), a new model for Graph Transformers that integrates arbitrary graph structure into the architecture.
GD learns to extract structural and positional relationships between distant nodes in the graph, which it then uses to direct the Transformer's attention and node representation.
Experiments on eight benchmarks show Graph Diffuser to be a highly competitive model, outperforming the state-of-the-art in a diverse set of domains.
arXiv Detail & Related papers (2023-03-01T16:11:05Z) - A Generalization of ViT/MLP-Mixer to Graphs [32.86160915431453]
We introduce a new class of GNNs called Graph ViT/MLP-Mixer.
They capture long-range dependencies and mitigate the issue of over-squashing.
They offer better speed and memory efficiency, with complexity linear in the number of nodes and edges.
arXiv Detail & Related papers (2022-12-27T03:27:46Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Pure Transformers are Powerful Graph Learners [51.36884247453605]
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
We prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers.
Our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results than GNN baselines and competitive results against Transformer variants with sophisticated graph-specific inductive biases.
arXiv Detail & Related papers (2022-07-06T08:13:06Z) - Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq, which transforms the hierarchical neighborhood of each node into a sequence (see the hop-sequence sketch after this list).
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z) - Graph Transformer Networks: Learning Meta-path Graphs to Improve GNNs [20.85042364993559]
We propose Graph Transformer Networks (GTNs) that generate new graph structures containing useful connections for downstream tasks.
Fast Graph Transformer Networks (FastGTNs) are 230x faster than GTNs and use 100x less memory.
We extend graph transformations to the semantic proximity of nodes, allowing non-local operations beyond meta-paths.
arXiv Detail & Related papers (2021-06-11T07:56:55Z) - GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
arXiv Detail & Related papers (2021-06-10T09:26:56Z) - Do Transformers Really Perform Bad for Graph Representation? [62.68420868623308]
We present Graphormer, which is built upon the standard Transformer architecture.
Our key insight for utilizing the Transformer on graphs is the necessity of effectively encoding the structural information of a graph into the model (see the spatial-encoding sketch after this list).
arXiv Detail & Related papers (2021-06-09T17:18:52Z)
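A hedged sketch of the hop-sequence idea referenced in the Neighbor2Seq entry above: node features are aggregated hop by hop so each node gets a fixed-length sequence ordered by hop distance, which can then be trained on in mini-batches. The function name and the dense adjacency are illustrative simplifications of this sketch, not the paper's implementation (which targets massive, sparse graphs).

```python
import torch


def neighborhood_to_sequence(adj, x, num_hops=3):
    """Turn each node's hierarchical neighborhood into a per-node sequence.

    adj: (N, N) row-normalized adjacency (dense here only for brevity).
    x:   (N, d) node features.
    Returns a (N, num_hops + 1, d) tensor: hop-0 (the node itself),
    hop-1 aggregate, hop-2 aggregate, ...
    """
    seq = [x]
    h = x
    for _ in range(num_hops):
        h = adj @ h  # aggregate features one hop further out
        seq.append(h)
    return torch.stack(seq, dim=1)


# Usage:
adj = torch.rand(100, 100)
adj = adj / adj.sum(dim=1, keepdim=True)       # row-normalize
x = torch.randn(100, 16)
print(neighborhood_to_sequence(adj, x).shape)  # torch.Size([100, 4, 16])
```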
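A hedged sketch of the structure-encoding idea referenced in the Graphormer entry above: pairwise shortest-path distances are mapped to learnable scalars and added to the attention logits as a bias, so the Transformer sees graph structure without message passing. The class and argument names (SpatialEncodingAttention, max_dist, spd) are assumptions of this sketch; Graphormer itself also uses centrality and edge encodings.

```python
import torch
import torch.nn as nn


class SpatialEncodingAttention(nn.Module):
    """Single-head attention with a learnable bias per shortest-path distance."""

    def __init__(self, dim, max_dist=10):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learnable scalar per distance bucket 0..max_dist.
        self.dist_bias = nn.Embedding(max_dist + 1, 1)

    def forward(self, x, spd):
        # x: (N, dim) node features; spd: (N, N) integer shortest-path distances.
        scores = (self.q(x) @ self.k(x).t()) / x.shape[-1] ** 0.5  # (N, N)
        scores = scores + self.dist_bias(spd).squeeze(-1)          # structural bias
        return torch.softmax(scores, dim=-1) @ self.v(x)


# Usage:
x = torch.randn(50, 32)
spd = torch.randint(0, 11, (50, 50))               # toy distances in [0, 10]
print(SpatialEncodingAttention(32)(x, spd).shape)  # torch.Size([50, 32])
```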
This list is automatically generated from the titles and abstracts of the papers in this site.