Combining Label Propagation and Simple Models Out-performs Graph Neural
Networks
- URL: http://arxiv.org/abs/2010.13993v2
- Date: Mon, 2 Nov 2020 19:42:34 GMT
- Title: Combining Label Propagation and Simple Models Out-performs Graph Neural
Networks
- Authors: Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin R. Benson
- Abstract summary: We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining graph-agnostic shallow models with two simple post-processing steps.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
- Score: 52.121819834353865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are the predominant technique for learning over
graphs. However, there is relatively little understanding of why GNNs are
successful in practice and whether they are necessary for good performance.
Here, we show that for many standard transductive node classification
benchmarks, we can exceed or match the performance of state-of-the-art GNNs by
combining shallow models that ignore the graph structure with two simple
post-processing steps that exploit correlation in the label structure: (i) an
"error correlation" that spreads residual errors in training data to correct
errors in test data and (ii) a "prediction correlation" that smooths the
predictions on the test data. We call this overall procedure Correct and Smooth
(C&S), and the post-processing steps are implemented via simple modifications
to standard label propagation techniques from early graph-based semi-supervised
learning methods. Our approach exceeds or nearly matches the performance of
state-of-the-art GNNs on a wide variety of benchmarks, with just a small
fraction of the parameters and orders of magnitude faster runtime. For
instance, we exceed the best known GNN performance on the OGB-Products dataset
with 137 times fewer parameters and greater than 100 times less training time.
The performance of our methods highlights how directly incorporating label
information into the learning algorithm (as was done in traditional techniques)
yields easy and substantial performance gains. We can also incorporate our
techniques into big GNN models, providing modest gains. Our code for the OGB
results is at https://github.com/Chillee/CorrectAndSmooth.
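Since both post-processing steps are small modifications of classical label propagation, they fit in a few lines. Below is a minimal NumPy sketch of C&S, not the authors' implementation (the linked repository has that): it assumes a dense symmetric adjacency matrix, one-hot training labels, and an already-trained graph-agnostic base predictor, and the hyperparameters alpha1, alpha2, num_iters as well as the unscaled correction step are illustrative simplifications of the paper's scaled variants.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency S = D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def correct_and_smooth(A, Z, Y, train_mask,
                       alpha1=0.8, alpha2=0.8, num_iters=50):
    """Post-process base predictions Z with the two C&S steps.

    A          : (n, n) symmetric adjacency matrix
    Z          : (n, c) base predictions, e.g. softmax outputs of an MLP
    Y          : (n, c) one-hot labels; rows outside train_mask are ignored
    train_mask : (n,) boolean mask of labeled training nodes
    """
    S = normalized_adjacency(A)

    # (i) "Correct": spread the residual errors observed on training nodes
    # across the graph with the label-spreading iteration
    # E <- (1 - alpha) * E0 + alpha * S @ E.
    E0 = np.zeros_like(Z)
    E0[train_mask] = Y[train_mask] - Z[train_mask]
    E = E0.copy()
    for _ in range(num_iters):
        E = (1 - alpha1) * E0 + alpha1 * (S @ E)
    Z_corrected = Z + E  # unscaled correction; the paper also uses scaled variants

    # (ii) "Smooth": reset training nodes to their true labels, then
    # propagate so that predictions vary smoothly over the graph.
    G0 = Z_corrected.copy()
    G0[train_mask] = Y[train_mask]
    G = G0.copy()
    for _ in range(num_iters):
        G = (1 - alpha2) * G0 + alpha2 * (S @ G)
    return G  # final class scores; predict with G.argmax(axis=1)
```

Both loops run the same classical label-spreading iteration, so C&S adds no trainable parameters on top of the base model; all learning happens in the cheap graph-agnostic predictor, which is where the parameter and runtime savings come from.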
Related papers
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble training scheme, named EnGCN, to address these issues.
Our proposed method achieves new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Certified Graph Unlearning [39.29148804411811]
Graph-structured data is ubiquitous in practice and often processed using graph neural networks (GNNs).
We introduce the first known framework for certified graph unlearning of GNNs.
Three different types of unlearning requests need to be considered, including node feature, edge and node unlearning.
arXiv Detail & Related papers (2022-06-18T07:41:10Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Training Free Graph Neural Networks for Graph Matching [103.45755859119035]
TFGM is a framework for boosting the performance of Graph Neural Network (GNN)-based graph matching without training.
Applying TFGM on various GNNs shows promising improvements over baselines.
arXiv Detail & Related papers (2022-01-14T09:04:46Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on established benchmark datasets demonstrate the strong performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- SAS: A Simple, Accurate and Scalable Node Classification Algorithm [7.592727516433364]
Graph neural networks have achieved state-of-the-art accuracy for graph node classification.
However, GNNs are difficult to scale to large graphs, frequently encountering out-of-memory errors even on moderately sized graphs.
Recent works have sought to address this problem using a two-stage approach.
arXiv Detail & Related papers (2021-04-19T08:17:35Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks (see the pre-computation sketch after this list).
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
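Several entries above lean on the same pre-computation trick: run the message passing once, up front, so that a graph-agnostic model can afterwards be trained on fixed, regular-shaped tensors (RpHGNN's one-time message passing, and the feature-smoothing view of GNNs in the last entry). Below is a generic SGC/SIGN-style sketch of that idea, not the method of any specific paper above; the function name and the choice of k are illustrative.

```python
import numpy as np

def precompute_smoothed_features(A, X, k=2):
    """One-time message passing: concatenate [X, S X, ..., S^k X]."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    S = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}
    feats = [X]
    for _ in range(k):
        feats.append(S @ feats[-1])  # one more hop of neighbor averaging
    return np.concatenate(feats, axis=1)  # (n, (k + 1) * f) feature matrix
```

The returned features can be fed to any off-the-shelf classifier (an MLP, logistic regression), so the graph never has to be touched again during training.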