Two Heads Are Better Than One: Boosting Graph Sparse Training via
Semantic and Topological Awareness
- URL: http://arxiv.org/abs/2402.01242v1
- Date: Fri, 2 Feb 2024 09:10:35 GMT
- Title: Two Heads Are Better Than One: Boosting Graph Sparse Training via
Semantic and Topological Awareness
- Authors: Guibin Zhang, Yanwei Yue, Kun Wang, Junfeng Fang, Yongduo Sui, Kai
Wang, Yuxuan Liang, Dawei Cheng, Shirui Pan, Tianlong Chen
- Abstract summary: Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
- Score: 80.87683145376305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) excel in various graph learning tasks but face
computational challenges when applied to large-scale graphs. A promising
solution is to remove non-essential edges to reduce the computational overheads
in GNN. Previous literature generally falls into two categories:
topology-guided and semantic-guided. The former maintains certain graph
topological properties yet often underperforms on GNNs due to low integration
with neural network training. The latter performs well at lower sparsity on
GNNs but faces performance collapse at higher sparsity levels. With this in
mind, we take the first step to propose a new research line and concept termed
Graph Sparse Training (GST), which dynamically manipulates sparsity at the data
level. Specifically, GST initially constructs a topology & semantic anchor at a
low training cost, followed by performing dynamic sparse training to align the
sparse graph with the anchor. We introduce the Equilibria Sparsification
Principle to guide this process, effectively balancing the preservation of both
topological and semantic information. Ultimately, GST produces a sparse graph
with maximum topological integrity and no performance degradation. Extensive
experiments on 6 datasets and 5 backbones showcase that GST (I) identifies
subgraphs at higher graph sparsity levels (1.67%~15.85% $\uparrow$) than
state-of-the-art sparsification methods, (II) preserves more key spectral
properties, (III) achieves 1.27-3.42$\times$ speedup in GNN inference and (IV)
successfully helps graph adversarial defense and graph lottery tickets.
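
Reading the abstract, GST appears to follow a drop-and-regrow loop in which per-edge semantic and topological scores are balanced against a precomputed anchor. The snippet below is a minimal, illustrative sketch of such an update, assuming the scores are already available as arrays; the blending weight `alpha`, the scoring inputs, and every function name here are hypothetical stand-ins rather than the paper's actual criteria.

```python
# Illustrative-only sketch of a GST-style drop-and-regrow step (not the authors' code).
import numpy as np

def equilibria_scores(sem, topo, alpha=0.5):
    """Blend normalized semantic and topological edge scores (alpha is a made-up knob)."""
    sem = (sem - sem.min()) / (np.ptp(sem) + 1e-12)
    topo = (topo - topo.min()) / (np.ptp(topo) + 1e-12)
    return alpha * sem + (1.0 - alpha) * topo

def drop_and_regrow(mask, sem, topo, update_frac=0.1, alpha=0.5):
    """One dynamic-sparse-training update on a boolean edge mask.

    Drops the lowest-scoring active edges and regrows the same number of
    highest-scoring inactive edges, so overall sparsity stays fixed.
    """
    scores = equilibria_scores(sem, topo, alpha)
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask)
    k = max(1, int(update_frac * active.size))

    drop = active[np.argsort(scores[active])[:k]]        # weakest kept edges
    grow = inactive[np.argsort(scores[inactive])[-k:]]   # strongest pruned edges

    new_mask = mask.copy()
    new_mask[drop] = False
    new_mask[grow] = True
    return new_mask

# Toy usage: 1000 candidate edges, keep 200 of them.
rng = np.random.default_rng(42)
n_edges = 1000
mask = np.zeros(n_edges, dtype=bool)
mask[rng.choice(n_edges, size=200, replace=False)] = True
sem = rng.random(n_edges)    # stand-in for semantic (training-derived) scores
topo = rng.random(n_edges)   # stand-in for topological scores
mask = drop_and_regrow(mask, sem, topo)
print(int(mask.sum()))       # still 200 active edges: sparsity is preserved
```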
Related papers
- Graph Sparsification via Mixture of Graphs [67.40204130771967]
We introduce Mixture-of-Graphs (MoG) to dynamically select tailored pruning solutions for each node.
MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node.
Experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNNs, demonstrate that MoG identifies subgraphs at higher sparsity levels.
arXiv Detail & Related papers (2024-05-23T07:40:21Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z) - LSP : Acceleration and Regularization of Graph Neural Networks via
Locality Sensitive Pruning of Graphs [2.4250821950628234]
Graph Neural Networks (GNNs) have emerged as highly successful tools for graph-related tasks.
Large graphs often involve many redundant components that can be removed without compromising the performance.
We propose a systematic method called Locality-Sensitive Pruning (LSP) for graph pruning based on Locality-Sensitive Hashing.
arXiv Detail & Related papers (2021-11-10T14:12:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.