Evaluating Deep Graph Neural Networks
- URL: http://arxiv.org/abs/2108.00955v1
- Date: Mon, 2 Aug 2021 14:55:10 GMT
- Title: Evaluating Deep Graph Neural Networks
- Authors: Wentao Zhang, Zeang Sheng, Yuezihan Jiang, Yikuan Xia, Jun Gao, Zhi
Yang, Bin Cui
- Abstract summary: Graph Neural Networks (GNNs) have already been widely applied in various graph mining tasks.
They suffer from the shallow architecture issue, which is the key impediment hindering further performance improvement.
We present Deep Graph Multi-Layer Perceptron (DGMLP), a powerful approach that helps guide deep GNN designs.
- Score: 27.902290204531326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have already been widely applied in various
graph mining tasks. However, they suffer from the shallow architecture issue,
which is the key impediment hindering further performance improvement.
Although several relevant approaches have been proposed, none of the existing
studies provides an in-depth understanding of the root causes of performance
degradation in deep GNNs. In this paper, we conduct the first systematic
experimental evaluation to present the fundamental limitations of shallow
architectures. Based on the experimental results, we answer the following two
essential questions: (1) what actually leads to the compromised performance of
deep GNNs; (2) when we need and how to build deep GNNs. The answers to the
above questions provide empirical insights and guidelines for researchers to
design deep and well-performing GNNs. To show the effectiveness of our proposed
guidelines, we present Deep Graph Multi-Layer Perceptron (DGMLP), a powerful
approach (a paradigm in its own right) that helps guide deep GNN designs.
Experimental results demonstrate three advantages of DGMLP: 1) high accuracy --
it achieves state-of-the-art node classification performance on various
datasets; 2) high flexibility -- it can flexibly choose different propagation
and transformation depths according to graph size and sparsity; 3) high
scalability and efficiency -- it supports fast training on large-scale graphs.
Our code is available at https://github.com/zwt233/DGMLP.
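For intuition, here is a minimal sketch of the decoupled design the abstract alludes to: a propagation depth d_p that smooths node features over the normalized adjacency, followed by a transformation depth d_t realized as a plain MLP. This is not the DGMLP implementation from the repository above; all function names, parameters, and the toy data are illustrative.

```python
# Illustrative sketch of a decoupled deep design (propagation depth d_p, transformation
# depth d_t); this is NOT the authors' DGMLP code, only the general pattern it guides.
import numpy as np
import scipy.sparse as sp

def propagate(x, adj, d_p):
    """Smooth features for d_p steps with the symmetrically normalized adjacency."""
    n = adj.shape[0]
    a_hat = adj + sp.eye(n)                           # add self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    p = d_inv_sqrt @ a_hat @ d_inv_sqrt               # D^-1/2 (A + I) D^-1/2
    for _ in range(d_p):
        x = p @ x
    return x

def mlp(x, weights):
    """A d_t-layer MLP (ReLU between layers) applied to the smoothed features."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ weights[-1]                            # class logits per node

# Toy usage: larger or sparser graphs usually tolerate a larger d_p, while d_t can stay small.
rng = np.random.default_rng(0)
adj = sp.random(100, 100, density=0.05, random_state=0)
adj = sp.csr_matrix(((adj + adj.T) > 0).astype(float))   # symmetrize
feats = rng.normal(size=(100, 16))
weights = [rng.normal(size=(16, 32)), rng.normal(size=(32, 7))]
logits = mlp(propagate(feats, adj, d_p=4), weights)
print(logits.shape)                                   # (100, 7)
```

Decoupling the two depths in this way is what allows the propagation depth to be tuned to graph size and sparsity without also deepening the trainable transformation.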
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive Field [39.679151680622375]
We introduce the Snowflake Hypothesis, a novel paradigm underpinning the concept of "one node, one receptive field".
We employ the simplest gradient and node-level cosine distance as guiding principles to regulate the aggregation depth for each node (a minimal sketch of this idea appears after this list).
The observational results demonstrate that our hypothesis can serve as a universal operator for a range of tasks.
arXiv Detail & Related papers (2023-08-19T15:21:12Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike methods based on the lottery ticket hypothesis (LTH), the proposed CGP approach requires no re-training, which significantly reduces computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z)
- Large-scale graph representation learning with very deep GNNs and self-supervision [17.887767916020774]
We show how to deploy graph neural networks (GNNs) at scale using the Open Graph Benchmark Large-Scale Challenge (OGB-LSC).
Our models achieved an award-level (top-3) performance on both the MAG240M and PCQM4M benchmarks.
arXiv Detail & Related papers (2021-07-20T11:35:25Z)
- Deep Graph Neural Networks with Shallow Subgraph Samplers [22.526363992743278]
We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency.
A properly sampled subgraph may exclude irrelevant or even noisy nodes, and still preserve the critical neighbor features and graph structures.
On the largest public graph dataset, ogbn-papers100M, we achieve state-of-the-art accuracy with an order of magnitude reduction in hardware cost.
arXiv Detail & Related papers (2020-12-02T18:23:48Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
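For the "one node, one receptive field" idea referenced in the Snowflake Hypothesis entry above, the following is a minimal sketch of regulating aggregation depth per node via node-level cosine distance. It is illustrative only, not the authors' implementation; `tau` and `max_depth` are made-up parameters.

```python
# Illustrative per-node aggregation-depth control via node-level cosine distance
# (a sketch of the idea, not the Snowflake authors' code).
import numpy as np

def adaptive_propagate(x, p, max_depth=8, tau=0.02):
    """Propagate features with matrix p; freeze a node once its representation stops changing."""
    active = np.ones(x.shape[0], dtype=bool)          # nodes still aggregating
    for _ in range(max_depth):
        x_new = p @ x
        # cosine distance between the old and new representation of every node
        num = np.sum(x * x_new, axis=1)
        denom = np.linalg.norm(x, axis=1) * np.linalg.norm(x_new, axis=1) + 1e-12
        cos_dist = 1.0 - num / denom
        x = np.where(active[:, None], x_new, x)       # frozen nodes keep their old features
        active &= cos_dist > tau                      # stop nodes whose features stabilized
        if not active.any():
            break
    return x
```

Under this scheme each node ends up with its own effective receptive field, which is the behaviour the entry above describes.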
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.