Deep Graph Neural Networks with Shallow Subgraph Samplers
- URL: http://arxiv.org/abs/2012.01380v2
- Date: Tue, 23 Feb 2021 09:13:54 GMT
- Title: Deep Graph Neural Networks with Shallow Subgraph Samplers
- Authors: Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey
Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
- Abstract summary: We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency.
A properly sampled subgraph may exclude irrelevant or even noisy nodes, and still preserve the critical neighbor features and graph structures.
On the largest public graph dataset, ogbn-papers100M, we achieve state-of-the-art accuracy with an order of magnitude reduction in hardware cost.
- Score: 22.526363992743278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While Graph Neural Networks (GNNs) are powerful models for learning
representations on graphs, most state-of-the-art models do not have significant
accuracy gain beyond two to three layers. Deep GNNs fundamentally need to
address two challenges: 1) the expressivity challenge due to oversmoothing, and 2) the computation
challenge due to neighborhood explosion. We propose a simple "deep GNN, shallow
sampler" design principle to improve both the GNN accuracy and efficiency -- to
generate representation of a target node, we use a deep GNN to pass messages
only within a shallow, localized subgraph. A properly sampled subgraph may
exclude irrelevant or even noisy nodes, and still preserve the critical
neighbor features and graph structures. The deep GNN then smooths the
informative local signals to enhance feature learning, rather than
oversmoothing the global graph signals into just "white noise". We
theoretically justify why the combination of deep GNNs with shallow samplers
yields the best learning performance. We then propose various sampling
algorithms and neural architecture extensions to achieve good empirical
results. On the largest public graph dataset, ogbn-papers100M, we achieve
state-of-the-art accuracy with an order of magnitude reduction in hardware
cost.
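The abstract describes the design at a high level; below is a minimal sketch (not the authors' released code) of the "deep GNN, shallow sampler" idea using PyTorch Geometric. The class name `DeepGNNShallowScope`, the 2-hop sampling scope, the 8-layer GCN stack, and the toy data are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of "deep GNN, shallow sampler": many message-passing layers,
# but only inside a small k-hop subgraph around the target nodes.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph


class DeepGNNShallowScope(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=8, num_hops=2):
        super().__init__()
        self.num_hops = num_hops  # shallow sampling scope (much smaller than num_layers)
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_dim if i == 0 else hidden_dim, hidden_dim)
             for i in range(num_layers)])
        self.out = torch.nn.Linear(hidden_dim, out_dim)

    def forward(self, x, edge_index, target_nodes):
        # 1) Shallow sampler: extract the localized subgraph around the targets.
        subset, sub_edge_index, mapping, _ = k_hop_subgraph(
            target_nodes, self.num_hops, edge_index,
            relabel_nodes=True, num_nodes=x.size(0))
        h = x[subset]
        # 2) Deep GNN: many layers, applied only inside the subgraph, so
        #    repeated smoothing stays local to the sampled neighbors.
        for conv in self.convs:
            h = F.relu(conv(h, sub_edge_index))
        # 3) Read out only the target nodes' final representations.
        return self.out(h[mapping])


# Toy usage: 100 nodes with random features/edges, predictions for 5 targets.
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
model = DeepGNNShallowScope(in_dim=16, hidden_dim=64, out_dim=7)
logits = model(x, edge_index, target_nodes=torch.tensor([0, 1, 2, 3, 4]))
print(logits.shape)  # torch.Size([5, 7])
```

The key point of the sketch is that model depth is decoupled from sampling depth: extra layers re-process the same small subgraph instead of pulling in an exponentially growing neighborhood.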
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - Unleash Graph Neural Networks from Heavy Tuning [33.948899558876604]
Graph Neural Networks (GNNs) are deep-learning architectures designed for graph-type data.
We propose a graph conditional latent diffusion framework (GNN-Diff) to generate high-performing GNNs directly by learning from checkpoints saved during a light-tuning coarse search.
arXiv Detail & Related papers (2024-05-21T06:23:47Z) - MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
However, this effectiveness sacrifices the efficiency of GNNs, since all possible subgraphs must be enumerated.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z) - Decoupling the Depth and Scope of Graph Neural Networks [21.273445344654597]
State-of-the-art Graph Neural Networks (GNNs) have limited scalability with respect to the graph and model sizes.
We propose a design principle to decouple the depth and scope of GNNs.
Our design achieves significant accuracy improvement with orders of magnitude reduction in computation and hardware cost.
arXiv Detail & Related papers (2022-01-19T20:52:42Z) - Evaluating Deep Graph Neural Networks [27.902290204531326]
Graph Neural Networks (GNNs) have already been widely applied in various graph mining tasks.
They suffer from the shallow-architecture issue, which is the key impediment to further performance improvement.
We present Deep Graph Multi-Layer Perceptron (DGMLP), a powerful approach that helps guide deep GNN designs.
arXiv Detail & Related papers (2021-08-02T14:55:10Z) - A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.