NAGphormer: Neighborhood Aggregation Graph Transformer for Node Classification in Large Graphs
- URL: http://arxiv.org/abs/2206.04910v1
- Date: Fri, 10 Jun 2022 07:23:51 GMT
- Title: NAGphormer: Neighborhood Aggregation Graph Transformer for Node Classification in Large Graphs
- Authors: Jinsong Chen, Kaiyuan Gao, Gaichao Li, Kun He
- Abstract summary: We propose a Neighborhood Aggregation Graph Transformer (NAGphormer) that is scalable to large graphs with millions of nodes.
NAGphormer constructs tokens for each node by a neighborhood aggregation module called Hop2Token.
We conduct extensive experiments on various popular benchmarks, including six small datasets and three large datasets.
- Score: 10.149586598073421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Transformers have demonstrated superiority on various graph learning
tasks in recent years. However, the complexity of existing Graph Transformers
scales quadratically with the number of nodes, making it hard to scale to
graphs with thousands of nodes. To this end, we propose a Neighborhood
Aggregation Graph Transformer (NAGphormer) that is scalable to large graphs
with millions of nodes. Before feeding the node features into the Transformer
model, NAGphormer constructs tokens for each node by a neighborhood aggregation
module called Hop2Token. For each node, Hop2Token aggregates neighborhood
features from each hop into a representation, and thereby produces a sequence
of token vectors. Subsequently, the resulting sequence of different hop
information serves as input to the Transformer model. By treating each node
as a sequence, NAGphormer can be trained in a mini-batch manner and thus
scales to large graphs. NAGphormer further develops an attention-based
readout function to adaptively learn the importance of each hop. We
conduct extensive experiments on various popular benchmarks, including six
small datasets and three large datasets. The results demonstrate that
NAGphormer consistently outperforms existing Graph Transformers and mainstream
Graph Neural Networks.
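To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of how Hop2Token and the attention-based readout could look. It assumes dense tensors, a normalized adjacency matrix, and a particular readout parametrization; these are illustrative assumptions based only on the abstract, not the authors' released code.

```python
import torch
import torch.nn as nn

def hop2token(adj_norm: torch.Tensor, x: torch.Tensor, num_hops: int) -> torch.Tensor:
    """Aggregate each node's k-hop neighborhood features into one token per hop.

    adj_norm: (N, N) normalized adjacency; x: (N, d) node features.
    Returns (N, num_hops + 1, d): token 0 is the node's own features and
    token k holds the k-hop aggregation A^k X.
    """
    tokens = [x]
    h = x
    for _ in range(num_hops):
        h = adj_norm @ h               # propagate features one hop further
        tokens.append(h)
    return torch.stack(tokens, dim=1)  # (N, num_hops + 1, d)

class AttentionReadout(nn.Module):
    """Attention-based readout: scores each hop token against the node's
    own (hop-0) token to learn per-hop importance weights."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (N, K + 1, dim) Transformer outputs; z[:, 0] is the node token
        node = z[:, :1].expand_as(z[:, 1:])                        # (N, K, dim)
        w = torch.softmax(self.score(torch.cat([node, z[:, 1:]], dim=-1)), dim=1)
        return z[:, 0] + (w * z[:, 1:]).sum(dim=1)                 # (N, dim)
```

Since the (N, num_hops + 1, d) token tensor is precomputed once, each row is an independent training example, which is what allows the Transformer to be trained in mini-batches on graphs with millions of nodes.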
Related papers
- NTFormer: A Composite Node Tokenized Graph Transformer for Node Classification [11.451341325579188]
We propose a new graph Transformer, called NTFormer, for node classification.
A new token generator, called Node2Par, generates diverse token sequences from different token elements for each node.
Experiments conducted on various benchmark datasets demonstrate the superiority of NTFormer over representative graph Transformers and graph neural networks for node classification.
arXiv Detail & Related papers (2024-06-27T15:16:00Z)
- SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention [1.4126245676224705]
Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs).
We propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module.
SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets.
arXiv Detail & Related papers (2024-03-21T03:11:53Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator; a generic linear-attention sketch of this all-pair idea appears after this list.
Experiments demonstrate the promising efficacy of the method on various tasks, including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Tokenized Graph Transformer with Neighborhood Augmentation for Node Classification in Large Graphs [11.868008619702277]
We propose a Neighborhood Aggregation Graph Transformer (NAGphormer) that treats each node as a sequence containing a series of tokens.
Hop2Token aggregates the neighborhood features from different hops into different representations, producing a sequence of token vectors as one input.
In addition, we propose a new data augmentation method called Neighborhood Augmentation (NrAug) based on the output of Hop2Token.
arXiv Detail & Related papers (2023-05-22T03:29:42Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- AGFormer: Efficient Graph Representation with Anchor-Graph Transformer [95.1825252182316]
We propose a novel graph Transformer architecture, termed Anchor Graph Transformer (AGFormer).
AGFormer first obtains some representative anchors and then converts node-to-node message passing into an anchor-to-anchor and anchor-to-node message passing process.
Extensive experiments on several benchmark datasets demonstrate the effectiveness and benefits of proposed AGFormer.
arXiv Detail & Related papers (2023-05-12T14:35:42Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to diversify training graph structures with techniques such as graph augmentations.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
- Pure Transformers are Powerful Graph Learners [51.36884247453605]
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
We prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers.
Our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results than GNN baselines and competitive results against Transformer variants with graph-specific inductive biases; a tokenization sketch appears at the end of this list.
arXiv Detail & Related papers (2022-07-06T08:13:06Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq, which transforms the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
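For the NodeFormer entry above, here is a generic linear-attention sketch of all-pair propagation that avoids the quadratic attention matrix. The ReLU feature map is a simple stand-in kernel chosen for illustration; NodeFormer's actual operator builds on random features with a Gumbel-Softmax relaxation for differentiable structure sampling, which this sketch does not reproduce.

```python
import torch

def kernelized_all_pair_propagation(q: torch.Tensor, k: torch.Tensor,
                                    v: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """All-pair message passing in O(N) time via a kernel feature map.

    Replacing softmax(Q K^T) V with phi(Q) (phi(K)^T V) never materializes
    the N x N attention matrix, so signals can propagate between arbitrary
    node pairs cheaply. q, k: (N, d) projected node signals; v: (N, dv).
    """
    phi_q = torch.relu(q) + eps   # simple positive feature map (illustrative)
    phi_k = torch.relu(k) + eps
    kv = phi_k.t() @ v                                 # (d, dv): one global pass
    norm = phi_q @ phi_k.sum(dim=0, keepdim=True).t()  # (N, 1) normalizer
    return (phi_q @ kv) / norm                         # (N, dv) updated signals
```

The key point is associativity: computing phi(K)^T V first costs O(N d dv) rather than O(N^2), which is what makes all-pair propagation feasible on large graphs.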
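For the TokenGT entry above, the following sketch shows one way the node-and-edge tokenization could look. The helper name, the identifier handling, and the omission of token-type embeddings are simplifications for illustration, not the paper's exact implementation.

```python
import torch

def graph_to_tokens(x: torch.Tensor, edge_index: torch.Tensor,
                    edge_attr: torch.Tensor, node_ids: torch.Tensor) -> torch.Tensor:
    """TokenGT-style tokenization: nodes and edges alike become tokens
    carrying a pair of node identifiers, so a plain Transformer can
    recover the graph structure from the token sequence alone.

    x: (N, d) node features; edge_index: (2, E); edge_attr: (E, d) edge
    features; node_ids: (N, p) identifiers (the paper derives them from
    orthogonal random features or Laplacian eigenvectors).
    """
    node_tokens = torch.cat([x, node_ids, node_ids], dim=-1)                # (N, d + 2p)
    u, v = edge_index
    edge_tokens = torch.cat([edge_attr, node_ids[u], node_ids[v]], dim=-1)  # (E, d + 2p)
    return torch.cat([node_tokens, edge_tokens], dim=0)  # (N + E)-token sequence
```

The resulting sequence can be fed to an off-the-shelf encoder such as torch.nn.TransformerEncoder, with no graph-specific modification to the attention itself.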