Fast and Effective GNN Training with Linearized Random Spanning Trees
- URL: http://arxiv.org/abs/2306.04828v3
- Date: Wed, 14 Feb 2024 16:45:47 GMT
- Title: Fast and Effective GNN Training with Linearized Random Spanning Trees
- Authors: Francesco Bonchi, Claudio Gentile, Francesco Paolo Nerini, André Panisson, Fabio Vitale
- Abstract summary: We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees, suitably transformed into path graphs.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
- Score: 20.73637495151938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new effective and scalable framework for training GNNs in node
classification tasks, based on the effective resistance, a powerful tool
solidly rooted in graph theory. Our approach progressively refines the GNN
weights on an extensive sequence of random spanning trees, suitably transformed
into path graphs that retain essential topological and node information of the
original graph. The sparse nature of these path graphs substantially lightens
the computational burden of GNN training. This not only enhances scalability
but also effectively addresses common issues like over-squashing,
over-smoothing, and performance deterioration caused by overfitting in small
training set regimes. We carry out an extensive experimental investigation on a
number of real-world graph benchmarks, where we apply our framework to graph
convolutional networks, showing simultaneous improvement of both training speed
and test accuracy over a wide pool of representative baselines.
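The training scheme described in the abstract can be pictured with a short sketch. The following is a minimal, hedged illustration, not the authors' implementation: it samples a random spanning tree, linearizes it into a path graph via a DFS ordering, and refines a one-layer GCN on that sparse path. The helper names, the DFS-based linearization, and the plain GCN layer are assumptions made for clarity; the paper's actual transformation retains additional topological and node information.

```python
# Hedged sketch of "refine GNN weights on linearized random spanning trees".
# Assumes a connected networkx graph g, feature matrix x (row i <-> nodes[i]),
# label tensor y, and a boolean train_mask. Not the authors' code.
import networkx as nx
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    """One-layer graph convolution: logits = A_hat @ X @ W."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, n_classes)

    def forward(self, x, a_hat):
        return self.lin(a_hat @ x)

def normalized_adjacency(graph, nodelist):
    """Symmetrically normalized adjacency (with self-loops) in the given node order."""
    a = torch.tensor(nx.to_numpy_array(graph, nodelist=nodelist), dtype=torch.float32)
    a = a + torch.eye(a.size(0))
    d = a.sum(dim=1).rsqrt()
    return d.unsqueeze(1) * a * d.unsqueeze(0)

def train_on_linearized_trees(g, x, y, train_mask, n_classes, n_trees=50, lr=0.01):
    """Refine GNN weights over a sequence of random spanning trees turned into paths."""
    nodes = list(g.nodes())
    model = TinyGCN(x.size(1), n_classes)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_trees):
        tree = nx.random_spanning_tree(g)           # random spanning tree of the input graph
        order = list(nx.dfs_preorder_nodes(tree))   # DFS visit order defines the linearization
        path = nx.relabel_nodes(nx.path_graph(len(order)),
                                {i: v for i, v in enumerate(order)})
        a_hat = normalized_adjacency(path, nodes)   # only n-1 edges: cheap propagation
        opt.zero_grad()
        logits = model(x, a_hat)
        loss = F.cross_entropy(logits[train_mask], y[train_mask])
        loss.backward()
        opt.step()
    return model
```

The point of the sketch is only that each weight update touches a graph with n-1 edges, which is what makes the per-tree training step cheap; how the paper weights the trees and preserves node information is beyond this illustration.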
Related papers
- Faster Inference Time for GNNs using coarsening [1.323700980948722]
Coarsening-based methods reduce the graph to a smaller one, resulting in faster computation.
No previous research has tackled the computational cost of inference.
This paper presents a novel approach to improve the scalability of GNNs through subgraph-based techniques.
arXiv Detail & Related papers (2024-10-19T06:27:24Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Simple yet Effective Gradient-Free Graph Convolutional Networks [20.448409424929604]
Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning.
In this paper, we relate over-smoothing with the vanishing gradient phenomenon and craft a gradient-free training framework.
Our methods achieve better and more stable performance on node classification tasks with varying depths, while requiring much less training time.
arXiv Detail & Related papers (2023-02-01T11:00:24Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph (see the effective-resistance sampling sketch after this list).
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
- Ripple Walk Training: A Subgraph-based training framework for Large and Deep Graph Neural Network [10.36962234388739]
We propose a general subgraph-based training framework, namely Ripple Walk Training (RWT), for deep and large graph neural networks.
RWT samples subgraphs from the full graph to constitute a mini-batch, and the full GNN is updated based on the mini-batch gradient.
Extensive experiments on different sizes of graphs demonstrate the effectiveness and efficiency of RWT in training various GNNs.
arXiv Detail & Related papers (2020-02-17T19:07:41Z)
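As referenced in the FastGAT entry above, the following is a hedged sketch of effective-resistance-based edge sampling in the spirit of Spielman-Srivastava spectral sparsification. It illustrates the general technique, not FastGAT's actual code, and the function names are made up for this example. Effective resistances are read off the pseudoinverse of the graph Laplacian, R(u,v) = L+[u,u] + L+[v,v] - 2*L+[u,v], and edges are kept with probability proportional to that value.

```python
# Hedged sketch of spectral sparsification by effective-resistance sampling.
# Dense pseudoinverse is for illustration only; it does not scale to large graphs.
import networkx as nx
import numpy as np

def effective_resistances(g):
    """R(u,v) = L+[u,u] + L+[v,v] - 2*L+[u,v], from the Laplacian pseudoinverse."""
    nodes = list(g.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    lap = nx.laplacian_matrix(g, nodelist=nodes).toarray().astype(float)
    lap_pinv = np.linalg.pinv(lap)
    return {(u, v): lap_pinv[idx[u], idx[u]] + lap_pinv[idx[v], idx[v]]
                    - 2.0 * lap_pinv[idx[u], idx[v]]
            for u, v in g.edges()}

def resistance_sparsify(g, keep_ratio=0.5, seed=0):
    """Keep a fraction of edges, sampled with probability proportional to effective resistance."""
    rng = np.random.default_rng(seed)
    res = effective_resistances(g)
    edges = list(res)
    probs = np.array([res[e] for e in edges])
    probs = probs / probs.sum()
    n_keep = max(1, int(keep_ratio * len(edges)))
    chosen = rng.choice(len(edges), size=n_keep, replace=False, p=probs)
    h = nx.Graph()
    h.add_nodes_from(g.nodes(data=True))
    h.add_edges_from(edges[int(i)] for i in chosen)
    return h
```

The sketch only conveys why structurally important edges with high effective resistance (e.g., bridges) are preferentially retained while redundant ones are dropped; practical sparsifiers approximate the resistances rather than computing a dense pseudoinverse.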