Decoupling the Depth and Scope of Graph Neural Networks
- URL: http://arxiv.org/abs/2201.07858v1
- Date: Wed, 19 Jan 2022 20:52:42 GMT
- Title: Decoupling the Depth and Scope of Graph Neural Networks
- Authors: Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey
Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
- Abstract summary: State-of-the-art Graph Neural Networks (GNNs) have limited scalability with respect to the graph and model sizes.
We propose a design principle to decouple the depth and scope of GNNs.
Our design achieves significant accuracy improvement with orders of magnitude reduction in computation and hardware cost.
- Score: 21.273445344654597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art Graph Neural Networks (GNNs) have limited scalability with
respect to the graph and model sizes. On large graphs, increasing the model
depth often means exponential expansion of the scope (i.e., receptive field).
Beyond just a few layers, two fundamental challenges emerge: 1. degraded
expressivity due to oversmoothing, and 2. expensive computation due to
neighborhood explosion. We propose a design principle to decouple the depth and
scope of GNNs -- to generate representation of a target entity (i.e., a node or
an edge), we first extract a localized subgraph as the bounded-size scope, and
then apply a GNN of arbitrary depth on top of the subgraph. A properly
extracted subgraph consists of a small number of critical neighbors, while
excluding irrelevant ones. The GNN, no matter how deep it is, smooths the local
neighborhood into informative representation rather than oversmoothing the
global graph into "white noise". Theoretically, decoupling improves the GNN
expressive power from the perspectives of graph signal processing (GCN),
function approximation (GraphSAGE) and topological learning (GIN). Empirically,
on seven graphs (with up to 110M nodes) and six backbone GNN architectures, our
design achieves significant accuracy improvement with orders of magnitude
reduction in computation and hardware cost.
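The design principle above lends itself to a compact illustration: fix the scope by extracting a small k-hop subgraph around the target node, then run a GNN of arbitrary depth on that subgraph only, so that depth no longer dictates the receptive field. The following is a minimal sketch assuming PyTorch Geometric; the GCN backbone, hop count, and layer count are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph


class DecoupledGNN(torch.nn.Module):
    """A GNN of arbitrary depth, meant to be run on a small extracted subgraph."""

    def __init__(self, in_dim, hidden_dim, out_dim, depth=8):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (depth - 1) + [out_dim]
        self.convs = torch.nn.ModuleList(
            [GCNConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
        return self.convs[-1](x, edge_index)


def embed_target(model, target_node, x, edge_index, num_hops=2):
    # Scope: a bounded k-hop neighborhood around the target node,
    # chosen independently of how deep the model is.
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        target_node, num_hops, edge_index,
        relabel_nodes=True, num_nodes=x.size(0),
    )
    # Depth: an arbitrarily deep GNN is applied to the subgraph only,
    # so adding layers does not expand the receptive field.
    out = model(x[subset], sub_edge_index)
    return out[mapping]  # representation of the target node
```

In this sketch, num_hops alone bounds the scope; increasing depth adds smoothing passes over the same bounded neighborhood rather than expanding it, which is the decoupling the abstract describes.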
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs)
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Feature Correlation Aggregation: on the Path to Better Graph Neural Networks [37.79964911718766]
Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning.
This paper introduces a central node permutation variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN.
A tangible boost in performance of the model is observed where the model surpasses previous state-of-the-art results by a significant margin while employing fewer parameters.
arXiv Detail & Related papers (2021-09-20T05:04:26Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Deep Graph Neural Networks with Shallow Subgraph Samplers [22.526363992743278]
We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency.
A properly sampled subgraph may exclude irrelevant or even noisy nodes, and still preserve the critical neighbor features and graph structures.
On the largest public graph dataset, ogbn-papers100M, we achieve state-of-the-art accuracy with an order of magnitude reduction in hardware cost.
arXiv Detail & Related papers (2020-12-02T18:23:48Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures (see the sketch after this list).
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
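The Eigen-GNN entry above is only summarized in one line; its exact formulation is given in the cited paper. As a loose illustration of the general idea of a structure-preserving plug-in, one could append leading eigenvectors of the adjacency matrix to the node features before any GNN layer. The snippet below is such a hedged sketch in plain PyTorch; the dense adjacency, the use of torch.linalg.eigh, and the choice of k are illustrative assumptions, not the authors' method.

```python
import torch


def eigen_structural_features(adj, x, k=16):
    """Append top-k adjacency eigenvectors to node features (illustrative sketch).

    adj: dense symmetric [N, N] adjacency matrix (an illustrative simplification);
    x:   [N, F] node feature matrix.
    """
    eigvals, eigvecs = torch.linalg.eigh(adj)   # eigenvalues in ascending order
    k = min(k, adj.size(0))
    structural = eigvecs[:, -k:]                # eigenvectors of the k largest eigenvalues
    # Plug-in style: enrich the features and leave the downstream GNN unchanged.
    return torch.cat([x, structural], dim=1)
```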