Multi-hop Attention Graph Neural Network
- URL: http://arxiv.org/abs/2009.14332v5
- Date: Wed, 25 Aug 2021 20:51:41 GMT
- Title: Multi-hop Attention Graph Neural Network
- Authors: Guangtao Wang, Rex Ying, Jing Huang, Jure Leskovec
- Abstract summary: Multi-hop Attention Graph Neural Network (MAGNA) is a principled way to incorporate multi-hop context information into every layer of attention computation.
We show that MAGNA captures large-scale structural information in every layer, and has a low-pass effect that eliminates noisy high-frequency information from graph data.
- Score: 70.21119504298078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The self-attention mechanism in graph neural networks (GNNs) has led to
state-of-the-art performance on many graph representation learning tasks.
Currently, at every layer, attention is computed between connected pairs of
nodes and depends solely on the representations of those two nodes. However, such
an attention mechanism does not account for nodes that are not directly connected
but provide important network context. Here we propose the Multi-hop Attention
Graph Neural Network (MAGNA), a principled way to incorporate multi-hop context
information into every layer of attention computation. MAGNA diffuses the
attention scores across the network, which increases the receptive field for
every layer of the GNN. Unlike previous approaches, MAGNA uses a diffusion
prior on attention values, to efficiently account for all paths between the
pair of disconnected nodes. We demonstrate in theory and experiments that MAGNA
captures large-scale structural information in every layer, and has a low-pass
effect that eliminates noisy high-frequency information from graph data.
Experimental results on node classification as well as the knowledge graph
completion benchmarks show that MAGNA achieves state-of-the-art results: MAGNA
achieves up to 5.7 percent relative error reduction over the previous
state-of-the-art on Cora, Citeseer, and Pubmed. MAGNA also obtains the best
performance on a large-scale Open Graph Benchmark dataset. On knowledge graph
completion MAGNA advances state-of-the-art on WN18RR and FB15k-237 across four
different performance metrics.
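The attention-diffusion idea described in the abstract (spreading one-hop attention scores across multi-hop paths with geometrically decaying weights) can be sketched in a few lines. The following is an illustrative NumPy sketch under stated assumptions, not the paper's implementation: the function name, the teleport parameter `alpha`, and the truncation depth `K` are hypothetical, and the fixed-point iteration approximates the diffused output Z = (sum_i theta_i * A^i) H with personalized-PageRank-style weights theta_i = alpha * (1 - alpha)^i.

```python
import numpy as np

def attention_diffusion(A, H, alpha=0.1, K=6):
    """Approximate multi-hop attention diffusion (illustrative sketch).

    A: (n, n) row-stochastic one-hop attention matrix.
    H: (n, d) node feature matrix.
    Iterating Z <- (1 - alpha) * A @ Z + alpha * H from Z = H converges to
    Z = (sum_i alpha * (1 - alpha)**i * A**i) @ H, i.e. attention scores
    diffused over paths of every length with geometrically decaying weight.
    """
    Z = H.copy()
    for _ in range(K):
        Z = (1 - alpha) * (A @ Z) + alpha * H
    return Z
```

With `alpha = 1.0` the diffusion collapses to the input features (zero-hop only); smaller `alpha` mixes in progressively longer paths, which is what enlarges the receptive field of each layer.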
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
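The message-passing paradigm mentioned in the entry above, in which each node aggregates its neighbors' features before a learned transformation, can be sketched as a single layer. This is a generic, hypothetical sketch (mean aggregation, ReLU activation, illustrative names), not code from the paper:

```python
import numpy as np

def message_passing_layer(adj, H, W):
    """One generic message-passing step (illustrative sketch).

    adj: (n, n) adjacency matrix (0/1 entries).
    H:   (n, d_in) node features.
    W:   (d_in, d_out) weight matrix.
    Each node averages its neighbors' features, then applies a linear
    transform followed by ReLU.
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg = np.clip(deg, 1.0, None)  # avoid division by zero for isolated nodes
    agg = (adj @ H) / deg          # mean over neighbors
    return np.maximum(agg @ W, 0.0)
```

One such layer only mixes information over one-hop neighborhoods; capturing longer-range interactions requires stacking layers, which is the range-of-interaction trade-off this entry studies.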
arXiv Detail & Related papers (2022-05-15T11:38:14Z) - Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as-is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z) - Reasoning Graph Networks for Kinship Verification: from Star-shaped to Hierarchical [85.0376670244522]
We investigate the problem of facial kinship verification by learning hierarchical reasoning graph networks.
We develop a Star-shaped Reasoning Graph Network (S-RGN) and a Hierarchical Reasoning Graph Network (H-RGN), both designed to exploit more powerful and flexible reasoning capacity.
arXiv Detail & Related papers (2021-09-06T03:16:56Z) - Missing Data Estimation in Temporal Multilayer Position-aware Graph Neural Network (TMP-GNN) [5.936402320555635]
Temporal Multilayered Position-aware Graph Neural Network (TMP-GNN) is a node embedding approach for dynamic graphs.
We evaluate the performance of TMP-GNN on two different representations of temporal multilayered graphs.
We incorporate TMP-GNN into a deep learning framework to estimate missing data and compare its performance with that of competing GNNs.
arXiv Detail & Related papers (2021-08-07T08:32:40Z) - Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z) - Isometric Graph Neural Networks [5.306334746787569]
We propose a technique to learn Isometric Graph Neural Networks (IGNN).
IGNN requires changing the input representation space and loss function to enable any GNN algorithm to generate representations that reflect distances between nodes.
We observe a consistent and substantial improvement of as high as 400% in Kendall's Tau (KT).
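Kendall's Tau, the metric this entry reports, is a rank correlation: it measures how well one set of distances (e.g., embedding distances) preserves the ordering of another (e.g., graph distances). A minimal O(n^2) sketch with no tie handling; the function name is illustrative and this is not the paper's evaluation code:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation between two equal-length sequences.

    Counts concordant pairs (same relative order in x and y) minus
    discordant pairs (opposite order), normalized by the number of pairs.
    Returns a value in [-1, 1]; ties are not handled in this sketch.
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs
```

Identical orderings give 1.0, fully reversed orderings give -1.0, so a 400% relative improvement in KT indicates much better agreement between embedding distances and graph distances.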
arXiv Detail & Related papers (2020-06-16T22:51:13Z) - How hard is to distinguish graphs with graph neural networks? [32.09819774228997]
This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN).
MPNN encompasses the majority of graph neural networks used today and is universal when nodes are given unique features.
An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions.
arXiv Detail & Related papers (2020-05-13T22:28:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.