AGFormer: Efficient Graph Representation with Anchor-Graph Transformer
- URL: http://arxiv.org/abs/2305.07521v1
- Date: Fri, 12 May 2023 14:35:42 GMT
- Title: AGFormer: Efficient Graph Representation with Anchor-Graph Transformer
- Authors: Bo Jiang, Fei Xu, Ziyan Zhang, Jin Tang and Feiping Nie
- Abstract summary: We propose a novel graph Transformer architecture, termed Anchor Graph Transformer (AGFormer).
AGFormer first obtains some representative anchors and then converts node-to-node message passing into an anchor-to-anchor and anchor-to-node message passing process.
Extensive experiments on several benchmark datasets demonstrate the effectiveness and benefits of the proposed AGFormer.
- Score: 95.1825252182316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To alleviate the local receptive field issue of GCNs, Transformers have
been exploited to capture long-range dependencies among nodes for graph data
representation and learning. However, existing graph Transformers generally
employ a regular self-attention module for all node-to-node message passing,
which must learn the affinities/relationships between all pairs of nodes and
therefore incurs a high computational cost. They are also usually sensitive to
graph noise. To overcome these issues, we propose a novel graph Transformer
architecture, termed Anchor Graph Transformer (AGFormer), which leverages an
anchor graph model. Specifically, AGFormer first obtains some representative
anchors and then converts node-to-node message passing into an anchor-to-anchor
and anchor-to-node message passing process. Thus, AGFormer is much more
efficient and robust than regular node-to-node Transformers. Extensive
experiments on several benchmark datasets demonstrate the effectiveness and
benefits of the proposed AGFormer.
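The two-stage scheme described in the abstract (anchor-to-anchor followed by anchor-to-node message passing) can be illustrated with a minimal sketch. The anchor selection strategy, the lack of learned projections, and the attention formulation below are illustrative assumptions based only on this abstract, not AGFormer's actual implementation; `anchor_graph_attention` and its parameters are hypothetical names.

```python
# Minimal NumPy sketch of anchor-based message passing in the spirit of the
# abstract's two-stage scheme. Anchor selection (random here), the absence of
# learned projections, and the attention details are illustrative assumptions.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def anchor_graph_attention(X, num_anchors=32, rng=None):
    """X: (N, d) node features. Returns updated (N, d) node features."""
    rng = rng or np.random.default_rng(0)
    N, d = X.shape
    M = min(num_anchors, N)

    # 1) Pick representative anchors (random nodes, for illustration only).
    anchor_idx = rng.choice(N, size=M, replace=False)
    A = X[anchor_idx]                                   # (M, d)

    # 2) Anchor-to-anchor self-attention: O(M^2) instead of O(N^2).
    attn_aa = softmax(A @ A.T / np.sqrt(d), axis=-1)    # (M, M)
    A = attn_aa @ A                                     # refined anchors

    # 3) Anchor-to-node message passing: each node attends to the M anchors,
    #    costing O(N * M) rather than all-pairs O(N^2).
    attn_na = softmax(X @ A.T / np.sqrt(d), axis=-1)    # (N, M)
    return attn_na @ A                                  # (N, d) updated nodes

# Usage: 1000 nodes, 64-dim features, 32 anchors.
X = np.random.default_rng(1).standard_normal((1000, 64))
print(anchor_graph_attention(X, num_anchors=32).shape)  # (1000, 64)
```

With M anchors the per-layer cost drops from O(N^2) for all-pairs attention to O(NM + M^2), which is the efficiency argument the abstract makes.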
Related papers
- Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers [14.123432611346674]
We propose a novel graph Transformer called GCFormer to harness graph information for learning optimal node representations.
GCFormer develops a hybrid token generator to create two types of token sequences, positive and negative, to capture diverse graph information.
A tailored Transformer-based backbone is adopted to learn meaningful node representations from these generated token sequences.
arXiv Detail & Related papers (2024-06-27T15:29:47Z) - NTFormer: A Composite Node Tokenized Graph Transformer for Node Classification [11.451341325579188]
We propose a new graph Transformer called NTFormer to address node classification issues.
A new token generator called Node2Par generates various token sequences using different token elements for each node.
Experiments conducted on various benchmark datasets demonstrate the superiority of NTFormer over representative graph Transformers and graph neural networks for node classification.
arXiv Detail & Related papers (2024-06-27T15:16:00Z) - SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations [75.71298846760303]
We show that one-layer attention can deliver surprisingly competitive performance across node property prediction benchmarks.
We frame the proposed scheme as Simplified Graph Transformers (SGFormer), which is empowered by a simple attention model.
We believe the proposed methodology alone opens a new technical path of independent interest for building Transformers on large graphs.
arXiv Detail & Related papers (2023-06-19T08:03:25Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Pure Transformers are Powerful Graph Learners [51.36884247453605]
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
We prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers.
Our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results than GNN baselines, along with competitive results.
arXiv Detail & Related papers (2022-07-06T08:13:06Z) - NAGphormer: Neighborhood Aggregation Graph Transformer for Node Classification in Large Graphs [10.149586598073421]
We propose a Neighborhood Aggregation Graph Transformer (NAGphormer) that is scalable to large graphs with millions of nodes.
NAGphormer constructs tokens for each node via a neighborhood aggregation module called Hop2Token (a minimal sketch of this hop-wise aggregation appears after this list).
We conduct extensive experiments on various popular benchmarks, including six small datasets and three large datasets.
arXiv Detail & Related papers (2022-06-10T07:23:51Z) - Gransformer: Transformer-based Graph Generation [14.161975556325796]
Gransformer is a Transformer-based algorithm for generating graphs.
We modify the Transformer encoder to exploit the structural information of the given graph.
We also introduce a graph-based familiarity measure between node pairs.
arXiv Detail & Related papers (2022-03-25T14:05:12Z) - Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
However, they suffer from the notorious over-smoothing problem, in which the learned representations converge to similar vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
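For the NAGphormer entry above, the Hop2Token idea of building a per-node token sequence from 0..K hop-aggregated features can be sketched as follows. The symmetric adjacency normalization and the `hop2token` name are assumptions made for illustration based on the summary; the actual module may differ.

```python
# Minimal NumPy sketch of a Hop2Token-style neighborhood aggregation: each node
# gets a short token sequence built from its 0..K hop-aggregated features,
# which a Transformer can then process per node. Normalization choices and
# variable names here are illustrative assumptions.
import numpy as np

def hop2token(adj, X, K=3):
    """adj: (N, N) adjacency, X: (N, d) features -> tokens of shape (N, K+1, d)."""
    N = adj.shape[0]
    A_hat = adj + np.eye(N)                       # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization

    tokens = [X]                                  # hop 0: the node's own features
    H = X
    for _ in range(K):
        H = A_norm @ H                            # aggregate one more hop
        tokens.append(H)
    return np.stack(tokens, axis=1)               # (N, K+1, d) token sequences

# Usage on a toy 4-node path graph with 8-dim features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).standard_normal((4, 8))
print(hop2token(adj, X, K=2).shape)  # (4, 3, 8)
```

Because the hop aggregation can be precomputed once, each node's token sequence can be processed independently in mini-batches, which is what makes this style of tokenization scalable to large graphs.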