Graph Random Neural Network for Semi-Supervised Learning on Graphs
- URL: http://arxiv.org/abs/2005.11079v4
- Date: Tue, 21 Sep 2021 04:09:10 GMT
- Title: Graph Random Neural Network for Semi-Supervised Learning on Graphs
- Authors: Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu,
Qiang Yang, Evgeny Kharlamov, Jie Tang
- Abstract summary: We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored.
Most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce.
In this paper, we propose a simple yet effective framework -- GRAPH RANDOM NEURAL NETWORKS (GRAND) -- to address these issues.
- Score: 36.218650686748546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of semi-supervised learning on graphs, for which graph
neural networks (GNNs) have been extensively explored. However, most existing
GNNs inherently suffer from the limitations of over-smoothing, non-robustness,
and weak-generalization when labeled nodes are scarce. In this paper, we
propose a simple yet effective framework -- GRAPH RANDOM NEURAL NETWORKS
(GRAND) -- to address these issues. In GRAND, we first design a random
propagation strategy to perform graph data augmentation. Then we leverage
consistency regularization to optimize the prediction consistency of unlabeled
nodes across different data augmentations. Extensive experiments on graph
benchmark datasets suggest that GRAND significantly outperforms
state-of-the-art GNN baselines on semi-supervised node classification. Finally,
we show that GRAND mitigates the issues of over-smoothing and non-robustness,
exhibiting better generalization behavior than existing GNNs. The source code
of GRAND is publicly available at https://github.com/Grand20/grand.
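To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch of random propagation (DropNode followed by mixed-order propagation) and the consistency loss over multiple augmentations. It follows the paper's high-level description; the drop rate, propagation order, and sharpening temperature are illustrative assumptions, not the reference implementation at the repository above.

```python
import torch
import torch.nn.functional as F

def random_propagation(X, A_hat, drop_rate=0.5, order=4):
    """Random propagation sketch: drop whole node-feature rows (DropNode),
    then average the 0..K-hop propagations of the perturbed features."""
    mask = (torch.rand(X.size(0), 1) > drop_rate).float()
    X_p = X * mask / (1.0 - drop_rate)      # keep the expectation unchanged
    out, cur = X_p, X_p
    for _ in range(order):                  # out = mean over k of A_hat^k X_p
        cur = A_hat @ cur
        out = out + cur
    return out / (order + 1)

def consistency_loss(logits_list, temperature=0.5):
    """Pull the predictions of different augmentations toward their
    sharpened average (treated as a fixed target)."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    avg = torch.stack(probs).mean(0)
    sharp = avg ** (1.0 / temperature)
    sharp = (sharp / sharp.sum(1, keepdim=True)).detach()
    return sum(((p - sharp) ** 2).sum(1).mean() for p in probs) / len(probs)
```

The full objective would combine supervised cross-entropy on the labeled nodes with this term over S augmentations, e.g. `loss = ce + lam * consistency_loss([mlp(random_propagation(X, A_hat)) for _ in range(S)])`, where `mlp` is the prediction module.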
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075] (2024-06-07)
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with graph size on a logarithmic scale and increase linearly with the spectral continuity constants of the filter functions.
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091] (2024-05-27)
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world application of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
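The selection step can be pictured as greedy coreset construction over precomputed spectral embeddings of ego-graphs. The k-center-style sketch below is a generic stand-in under that reading, not SGGC's exact rule:

```python
import numpy as np

def greedy_coreset(embeddings, k):
    """Greedy k-center selection: repeatedly add the ego-graph whose
    (spectral) embedding is farthest from everything chosen so far."""
    selected = [0]                                   # arbitrary seed point
    d = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                      # farthest remaining point
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected
```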
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412] (2022-10-27)
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
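A graphon is a symmetric function W: [0,1]^2 -> [0,1] that serves as the limit object: graphs of any size are obtained by sampling latent positions and connecting nodes independently with probability W(u_i, u_j). A sampling sketch, with an arbitrary illustrative choice of W:

```python
import numpy as np

def sample_graphon_graph(n, W=lambda u, v: 0.8 * np.exp(-3.0 * abs(u - v))):
    """Sample an n-node undirected graph from a graphon W."""
    u = np.random.rand(n)                    # latent node positions in [0,1]
    P = W(u[:, None], u[None, :])            # pairwise edge probabilities
    A = (np.random.rand(n, n) < P).astype(float)
    A = np.triu(A, 1)                        # keep one triangle, no self-loops
    return A + A.T                           # symmetrize

# Growing-graph training then reuses the same GNN weights while n increases,
# e.g. n = 100, 200, 400, ..., instead of training once on one huge graph.
```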
- Geodesic Graph Neural Network for Efficient Graph Representation Learning [34.047527874184134] (2022-10-06)
We propose an efficient GNN framework called Geodesic GNN (GDGNN).
It injects conditional relationships between nodes into the model without labeling.
Conditioned on the geodesic representations, GDGNN is able to generate node, link, and graph representations that carry much richer structural information than plain GNNs.
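One generic way to realize geodesic conditioning: take the shortest path between two nodes, pool the embeddings of the nodes on it, and append the result to the endpoint embeddings. The sketch below (networkx, mean pooling) is an assumed simplification, not GDGNN's exact design:

```python
import networkx as nx
import numpy as np

def geodesic_link_feature(G, emb, u, v):
    """Geodesic-conditioned link feature: endpoint embeddings plus a
    pooled embedding of the nodes on one shortest u-v path.
    emb maps each node of G to a fixed-size np.ndarray."""
    path = nx.shortest_path(G, u, v)                 # one geodesic
    geo = np.mean([emb[w] for w in path], axis=0)    # pool path nodes
    return np.concatenate([emb[u], emb[v], geo])
```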
- GRAND+: Scalable Graph Random Neural Networks [26.47857017550499] (2022-03-12)
Graph neural networks (GNNs) have been widely adopted for semi-supervised learning on graphs.
However, it is difficult for GRAND to handle large-scale graphs, since its effectiveness relies on computationally expensive data augmentation procedures.
We present a scalable and high-performance GNN framework GRAND+ for semi-supervised graph learning.
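GRAND+ replaces exact dense propagation with local approximations of the propagation matrix's rows computed by a generalized forward push. The sketch below shows plain PPR-style forward push, which conveys the scalability argument (per-row cost depends on the tolerance eps, not on graph size); GRAND+'s actual push rule is more general:

```python
from collections import defaultdict

def forward_push_row(adj_list, degree, source, alpha=0.15, eps=1e-4):
    """Approximate one row of a PPR-style propagation matrix by pushing
    residual mass locally until all residuals fall below eps * degree.
    Assumes every touched node has degree > 0."""
    p, r = defaultdict(float), defaultdict(float)
    r[source] = 1.0
    queue = [source]
    while queue:
        u = queue.pop()
        if r[u] < eps * degree[u]:
            continue                          # stale queue entry
        p[u] += alpha * r[u]                  # retain an alpha fraction
        share = (1.0 - alpha) * r[u] / degree[u]
        r[u] = 0.0
        for v in adj_list[u]:
            if r[v] < eps * degree[v] <= r[v] + share:
                queue.append(v)               # v just crossed the threshold
            r[v] += share
    return p                                  # sparse approximate row
```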
- Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853] (2021-12-01)
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from the graphs themselves.
Our proposed G$^2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
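The global supervision path presupposes a "graph of graphs": each graph becomes a node, linked to its nearest neighbors in some graph-level similarity space so minority-class graphs can borrow signal from similar graphs. A kNN construction sketch (the embedding source is left abstract and is an assumption here):

```python
import numpy as np

def knn_graph_of_graphs(graph_embs, k=5):
    """Connect each graph to its k most similar graphs (cosine similarity
    over graph-level embeddings); returns a symmetric adjacency matrix."""
    norm = np.linalg.norm(graph_embs, axis=1, keepdims=True)
    Z = graph_embs / (norm + 1e-12)
    sim = Z @ Z.T
    np.fill_diagonal(sim, -np.inf)             # forbid self-loops
    nbrs = np.argsort(-sim, axis=1)[:, :k]     # top-k neighbors per graph
    A = np.zeros(sim.shape)
    for i in range(len(nbrs)):
        A[i, nbrs[i]] = 1.0
    return np.maximum(A, A.T)                  # symmetrize
```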
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437] (2021-02-12)
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
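The unified sparsification idea in one layer: differentiable masks on both the adjacency matrix and the weights, trained with an L1 push toward sparsity, then thresholded to extract the graph lottery ticket. The soft sigmoid masking below is an illustrative choice:

```python
import torch
import torch.nn as nn

class MaskedGCNLayer(nn.Module):
    """GCN-style layer with learnable masks on the graph and the weights
    (a sketch of unified GNN sparsification)."""
    def __init__(self, A, in_dim, out_dim):
        super().__init__()
        self.A = A                                    # dense (n x n) tensor
        self.m_graph = nn.Parameter(torch.zeros_like(A))
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.m_weight = nn.Parameter(torch.zeros_like(self.lin.weight))

    def forward(self, X):
        A_masked = self.A * torch.sigmoid(self.m_graph)
        W_masked = self.lin.weight * torch.sigmoid(self.m_weight)
        return torch.relu(A_masked @ X @ W_masked.t())

    def l1_penalty(self):                             # pushes masks toward 0
        return torch.sigmoid(self.m_graph).sum() + \
               torch.sigmoid(self.m_weight).sum()
```

After training, the lowest-magnitude mask entries are pruned and the surviving pair (sparse graph, sparse subnetwork) is retrained, mirroring iterative magnitude pruning.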
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142] (2020-11-13)
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
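Mechanically: a small parameterized network scores each edge from its endpoint features, low-scoring edges are softly dropped, and the sum of keep-probabilities penalizes the expected number of surviving edges. PTDNet's full objective adds further regularizers, so treat this as a reduced sketch:

```python
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    """Score each edge from its endpoint features; the scores act as soft
    keep-probabilities and their sum as an edge-count penalty."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, X, edge_index):
        src, dst = edge_index                         # LongTensors, shape (E,)
        z = torch.cat([X[src], X[dst]], dim=1)        # endpoint features
        keep = torch.sigmoid(self.scorer(z)).squeeze(-1)
        return keep, keep.sum()                       # probs, sparsity penalty
```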
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948] (2020-06-27)
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models trained from scratch without pre-training, by up to 9.1% across various downstream tasks.
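Generative pre-training can be sketched as hiding part of the input and training the network to regenerate it. GPT-GNN factorizes the graph likelihood into attribute generation and edge generation; the attribute half is shown below, with gnn and decoder as placeholder modules (assumptions, not the paper's architecture):

```python
import torch
import torch.nn.functional as F

def attribute_generation_step(gnn, decoder, X, A_hat, mask_rate=0.3):
    """One self-supervised step: hide some node attributes, encode the
    graph, and reconstruct the hidden attributes from the embeddings."""
    masked = torch.rand(X.size(0)) < mask_rate   # nodes to reconstruct
    X_in = X.clone()
    X_in[masked] = 0.0                           # remove their attributes
    H = gnn(X_in, A_hat)                         # any GNN encoder
    return F.mse_loss(decoder(H[masked]), X[masked])
```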