Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks
- URL: http://arxiv.org/abs/2207.08629v2
- Date: Tue, 19 Jul 2022 02:53:45 GMT
- Title: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks
- Authors: Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du,
Wenbin Hu, Danilo Mandic
- Abstract summary: We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
- Score: 52.566735716983956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) tend to suffer from high computation costs due
to the exponentially increasing scale of graph data and the number of model
parameters, which restricts their utility in practical applications. To this
end, some recent works focus on sparsifying GNNs with the lottery ticket
hypothesis (LTH) to reduce inference costs while maintaining performance
levels. However, the LTH-based methods suffer from two major drawbacks: 1) they
require exhaustive and iterative training of dense models, resulting in an
extremely large training computation cost, and 2) they only trim graph
structures and model parameters but ignore the node feature dimension, where
significant redundancy exists. To overcome the above limitations, we propose a
comprehensive graph gradual pruning framework termed CGP. This is achieved by
designing a during-training graph pruning paradigm to dynamically prune GNNs
within one training process. Unlike LTH-based methods, the proposed CGP
approach requires no re-training, which significantly reduces the computation
costs. Furthermore, we design a co-sparsifying strategy to comprehensively trim
all three core elements of GNNs: graph structures, node features, and model
parameters. Meanwhile, aiming at refining the pruning operation, we introduce a
regrowth process into our CGP framework, in order to re-establish the pruned
but important connections. The proposed CGP is evaluated on a node
classification task across 6 GNN architectures, including shallow models (GCN
and GAT), shallow-but-deep-propagation models (SGC and APPNP), and deep models
(GCNII and ResGCN), on a total of 14 real-world graph datasets, including
large-scale graph datasets from the challenging Open Graph Benchmark.
Experiments reveal that our proposed strategy greatly improves both training
and inference efficiency while matching or even exceeding the accuracy of
existing methods.
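The abstract describes the mechanism only at a high level. Below is a minimal sketch, in plain PyTorch, of what a during-training prune-and-regrow step over binary masks could look like; the mask granularity, magnitude/gradient scoring, and fractions used here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a during-training prune-and-regrow step over binary masks.
# Granularity, scoring, and schedule are illustrative assumptions, not CGP's code.
import torch


def prune_and_regrow(scores: torch.Tensor,
                     mask: torch.Tensor,
                     grad: torch.Tensor,
                     prune_frac: float = 0.1,
                     regrow_frac: float = 0.05) -> torch.Tensor:
    """Drop the lowest-|score| active entries, then re-activate the pruned
    entries with the largest |gradient| (the 'regrowth' idea)."""
    flat_scores = scores.abs().flatten()
    flat_grad = grad.abs().flatten()
    flat_mask = mask.flatten().clone()

    # Prune: remove a fraction of the currently active entries by magnitude.
    active = flat_mask.nonzero(as_tuple=True)[0]
    n_prune = int(prune_frac * active.numel())
    if n_prune > 0:
        drop = active[flat_scores[active].topk(n_prune, largest=False).indices]
        flat_mask[drop] = 0.0

    # Regrow: re-establish pruned-but-important entries, judged by gradient size.
    inactive = (flat_mask == 0).nonzero(as_tuple=True)[0]
    n_regrow = int(regrow_frac * inactive.numel())
    if n_regrow > 0:
        grow = inactive[flat_grad[inactive].topk(n_regrow).indices]
        flat_mask[grow] = 1.0

    return flat_mask.view_as(mask)


if __name__ == "__main__":
    torch.manual_seed(0)
    n_nodes, in_dim, out_dim = 8, 32, 16          # toy sizes
    adj_mask = torch.ones(n_nodes, n_nodes)       # graph-structure mask
    feat_mask = torch.ones(in_dim)                # node-feature-dimension mask
    weight = torch.randn(in_dim, out_dim, requires_grad=True)
    weight_mask = torch.ones_like(weight)         # model-parameter mask

    # One toy forward/backward pass to obtain gradients for the regrowth criterion.
    adj = torch.rand(n_nodes, n_nodes) * adj_mask
    x = torch.rand(n_nodes, in_dim) * feat_mask
    loss = ((adj @ x) @ (weight * weight_mask)).pow(2).mean()
    loss.backward()

    # Apply the rule to the parameter mask; the adjacency and feature masks
    # would be updated analogously to co-sparsify all three elements.
    weight_mask = prune_and_regrow(weight.detach(), weight_mask, weight.grad)
    print("active weights:", int(weight_mask.sum().item()), "/", weight.numel())
```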
Related papers
- Faster Inference Time for GNNs using coarsening [1.323700980948722]
Coarsening-based methods are used to reduce the graph to a smaller one, resulting in faster computation.
No previous research has tackled the cost incurred during inference.
This paper presents a novel approach to improving the scalability of GNNs through subgraph-based techniques.
arXiv Detail & Related papers (2024-10-19T06:27:24Z) - Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - Two Heads Are Better Than One: Boosting Graph Sparse Training via
Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Fast Graph Condensation with Structure-based Neural Tangent Kernel [30.098666399404287]
We propose a novel dataset condensation framework (GC-SNTK) for graph-structured data.
A Structure-based Neural Tangent Kernel (SNTK) is developed to capture the topology of the graph and serves as the kernel function in the kernel ridge regression (KRR) paradigm.
Experiments demonstrate the effectiveness of our proposed model in accelerating graph condensation while maintaining high prediction performance.
arXiv Detail & Related papers (2023-10-17T07:25:59Z) - Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938]
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
arXiv Detail & Related papers (2023-06-07T23:12:42Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble-based training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, with larger gains on noisier datasets; a generic sketch of this edge-penalty idea follows the list below.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
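As a rough illustration of the edge-penalty idea mentioned for PTDNet above, the sketch below scores edges with a small parameterized network and adds a relaxed edge count to the loss. The scorer architecture, sigmoid relaxation, and penalty weight are generic assumptions for illustration only, not the paper's actual design.

```python
# Generic sketch of penalizing edge count via a learned edge scorer.
# The scorer, sigmoid relaxation, and penalty form are illustrative assumptions,
# not PTDNet's actual architecture.
import torch
import torch.nn as nn


class EdgeScorer(nn.Module):
    """Scores each edge from the concatenated endpoint features."""
    def __init__(self, feat_dim: int, hidden: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                     # edge_index: (2, num_edges)
        pair = torch.cat([x[src], x[dst]], dim=-1)
        return torch.sigmoid(self.mlp(pair)).squeeze(-1)  # keep-probabilities


if __name__ == "__main__":
    torch.manual_seed(0)
    num_nodes, feat_dim = 10, 8
    x = torch.randn(num_nodes, feat_dim)
    edge_index = torch.randint(0, num_nodes, (2, 40))

    scorer = EdgeScorer(feat_dim)
    keep_prob = scorer(x, edge_index)

    task_loss = torch.tensor(0.0)                 # placeholder for the GNN task loss
    sparsity_penalty = keep_prob.sum()            # relaxed "number of edges"
    loss = task_loss + 1e-2 * sparsity_penalty    # trade accuracy for sparsity
    loss.backward()
    print("kept edges (prob > 0.5):", int((keep_prob > 0.5).sum().item()))
```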