Graph Transformer Networks: Learning Meta-path Graphs to Improve GNNs
- URL: http://arxiv.org/abs/2106.06218v1
- Date: Fri, 11 Jun 2021 07:56:55 GMT
- Title: Graph Transformer Networks: Learning Meta-path Graphs to Improve GNNs
- Authors: Seongjun Yun, Minbyul Jeong, Sungdong Yoo, Seunghun Lee, Sean S. Yi,
Raehyun Kim, Jaewoo Kang, Hyunwoo J. Kim
- Abstract summary: We propose Graph Transformer Networks (GTNs) that generate new graph structures and include useful connections for tasks.
Fast Graph Transformer Networks (FastGTNs) are 230x faster and use 100x less memory.
We extend graph transformations to the semantic proximity of nodes allowing non-local operations beyond meta-paths.
- Score: 20.85042364993559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have been widely applied to various fields due
to their powerful representations of graph-structured data. Despite the success
of GNNs, most existing GNNs are designed to learn node representations on the
fixed and homogeneous graphs. These limitations become especially problematic
when learning representations on a misspecified graph or a heterogeneous graph
that consists of various types of nodes and edges. To address these limitations,
we propose Graph Transformer Networks (GTNs) that are capable of generating new
graph structures, which preclude noisy connections and include useful
connections (e.g., meta-paths) for tasks, while learning effective node
representations on the new graphs in an end-to-end fashion. We further propose
an enhanced version of GTNs, Fast Graph Transformer Networks (FastGTNs), that
improves the scalability of graph transformations. Compared to GTNs, FastGTNs
are 230x faster and use 100x less memory while performing graph transformations
identical to those of GTNs. In addition, we extend graph transformations to the
semantic proximity of nodes, allowing non-local operations beyond meta-paths.
Extensive experiments on both homogeneous graphs and heterogeneous graphs show
that GTNs and FastGTNs with non-local operations achieve the state-of-the-art
performance for node classification tasks. The code is available at:
https://github.com/seongjunyun/Graph_Transformer_Networks
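As a rough illustration of the mechanism the abstract describes, the sketch below softly selects per-edge-type adjacency matrices with learned softmax weights and composes two selections into a (soft) meta-path adjacency. This is a minimal sketch of the idea, not the repository's API; the names `soft_select` and `gt_layer` are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_select(adjs, logits):
    """Convex combination of per-edge-type adjacencies: the soft
    'pick an edge type' step of a Graph Transformer layer."""
    weights = F.softmax(logits, dim=0)            # one weight per edge type
    return torch.einsum('k,kij->ij', weights, adjs)

def gt_layer(adjs, logits_a, logits_b):
    """Multiply two softly selected adjacencies to obtain a soft
    length-2 meta-path adjacency (e.g. composing A-P and P-A into A-P-A)."""
    return soft_select(adjs, logits_a) @ soft_select(adjs, logits_b)

# toy heterogeneous graph: K = 3 edge types over N = 4 nodes
K, N = 3, 4
adjs = torch.rand(K, N, N)
logits_a = torch.randn(K, requires_grad=True)     # learned selection weights
logits_b = torch.randn(K, requires_grad=True)
A_meta = gt_layer(adjs, logits_a, logits_b)       # differentiable w.r.t. the logits
```

Because the selection weights are differentiable, which meta-paths are useful can be learned end-to-end together with the node representations, as the abstract states.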
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitively expensive on large graphs unless those graphs are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z)
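The entry above refers to the standard MPNN update; a minimal sketch of one such layer (mean aggregation assumed, names illustrative) shows why the cost scales with the number of edges:

```python
import torch

def mpnn_layer(x, edge_index, weight):
    """One message-passing step: each node averages its neighbors'
    features, then applies a linear transform. The aggregation touches
    every edge once, i.e. O(|E|) -- prohibitive on large dense graphs."""
    src, dst = edge_index                          # (source, target) index pairs
    n = x.size(0)
    msg_sum = torch.zeros_like(x).index_add_(0, dst, x[src])
    deg = torch.zeros(n).index_add_(0, dst, torch.ones(src.size(0))).clamp(min=1)
    return torch.relu((msg_sum / deg.unsqueeze(1)) @ weight)

# toy graph: 3 nodes, 4 directed edges
x = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])
out = mpnn_layer(x, edge_index, torch.randn(8, 8))
```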
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention [1.4126245676224705]
Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs).
We propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module.
SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets.
arXiv Detail & Related papers (2024-03-21T03:11:53Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
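The stratification described above amounts to giving each degree group its own weight matrix. A minimal sketch follows; the bucket boundaries and the bucketize-by-degree grouping rule are assumptions for illustration, not necessarily the paper's exact scheme.

```python
import torch

def stratified_transform(x, degrees, weights, boundaries):
    """Apply a separate learned weight matrix to the nodes in each
    degree bucket; `boundaries` splits degrees into len(weights) groups."""
    groups = torch.bucketize(degrees, boundaries)  # group id per node
    out = torch.empty(x.size(0), weights.size(2))
    for g in range(weights.size(0)):
        mask = groups == g
        out[mask] = x[mask] @ weights[g]           # per-group transform
    return out

x = torch.randn(6, 4)
degrees = torch.tensor([1, 2, 2, 5, 8, 30])
boundaries = torch.tensor([3, 10])                 # buckets: <3, 3-9, >=10
weights = torch.randn(3, 4, 4)                     # one matrix per bucket
h = stratified_transform(x, degrees, weights, boundaries)
```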
- Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability: a network trained on one graph can be applied to a different graph without retraining while retaining its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
arXiv Detail & Related papers (2023-07-25T02:11:41Z)
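The graphon view above underlies the transferability claim: graphs of different sizes sampled from the same graphon share a common limit object. A minimal sampling sketch, where the example graphon `w` is an arbitrary choice:

```python
import numpy as np

def sample_from_graphon(w, n, rng):
    """Sample an n-node graph from graphon w: [0,1]^2 -> [0,1] by drawing
    latent positions uniformly and connecting i,j with prob. w(u_i, u_j)."""
    u = rng.uniform(size=n)
    probs = w(u[:, None], u[None, :])              # pairwise edge probabilities
    upper = rng.uniform(size=(n, n)) < probs
    adj = np.triu(upper, k=1)                      # keep one triangle ...
    return (adj | adj.T).astype(float)             # ... then symmetrize

rng = np.random.default_rng(0)
w = lambda x, y: 0.8 * np.exp(-3 * np.abs(x - y))  # an example smooth graphon
small = sample_from_graphon(w, 50, rng)
large = sample_from_graphon(w, 5000, rng)
# intuitively, a GNN trained on `small` should transfer to `large`,
# since both graphs converge to the same graphon as n grows
```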
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
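NodeFormer's operator builds on the standard Gumbel-Softmax trick; the sketch below shows only that plain O(N^2) trick for differentiable all-pair neighbor sampling, not the paper's linear-time kernelization, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def gumbel_edge_weights(q, k, tau=0.5):
    """Differentiable 'which node do I attend to' sampling: Gumbel-Softmax
    over all-pair similarity scores yields near-discrete, trainable edges."""
    scores = q @ k.t() / q.size(1) ** 0.5          # all-pair attention logits
    return F.gumbel_softmax(scores, tau=tau, dim=-1)

q = torch.randn(5, 16)
k = torch.randn(5, 16)
attn = gumbel_edge_weights(q, k)                   # rows: soft one-hot neighbor choices
x_new = attn @ torch.randn(5, 16)                  # propagate features over sampled edges
```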
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Deformable Graph Convolutional Networks [12.857403315970231]
Graph neural networks (GNNs) have significantly improved representation power for graph-structured data.
In this paper, we propose Deformable Graph Convolutional Networks (Deformable GCNs) that adaptively perform convolution in multiple latent spaces.
Our framework simultaneously learns the node positional embeddings to determine the relations between nodes in an end-to-end fashion.
arXiv Detail & Related papers (2021-12-29T07:55:29Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
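Under the feature-smoothing view mentioned above, multi-hop neighbor averages can be precomputed once and handed to a graph-free classifier. A minimal sketch of that idea (for heterogeneous graphs one would plausibly repeat it per relation type; the names are illustrative):

```python
import torch

def smoothed_features(adj, x, hops):
    """Precompute multi-hop neighbor-averaged features once, so that
    training afterwards needs no graph at all -- the 'GNNs mostly smooth
    features' perspective made explicit."""
    deg = adj.sum(1, keepdim=True).clamp(min=1)
    p = adj / deg                                  # row-normalized adjacency
    feats, h = [x], x
    for _ in range(hops):
        h = p @ h                                  # one more hop of smoothing
        feats.append(h)
    return torch.cat(feats, dim=1)                 # feed to a plain MLP classifier

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = torch.randn(3, 4)
features = smoothed_features(adj, x, hops=2)       # shape (3, 12)
```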
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
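The GraphCL entry above pairs augmented views of each graph with a contrastive objective. The sketch below implements a simplified NT-Xent-style loss (cross-view negatives only, illustrative shapes), not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss between two augmented views of the same graphs:
    matching views attract, all other pairs in the batch repel."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                        # pairwise cosine similarities
    targets = torch.arange(z1.size(0))             # positives on the diagonal
    return F.cross_entropy(sim, targets)

# embeddings of 8 graphs under two random augmentations
# (e.g. edge dropping, node dropping, attribute masking)
z1 = torch.randn(8, 32)
z2 = torch.randn(8, 32)
loss = nt_xent(z1, z2)
```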
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.