Towards Sparsification of Graph Neural Networks
- URL: http://arxiv.org/abs/2209.04766v1
- Date: Sun, 11 Sep 2022 01:39:29 GMT
- Title: Towards Sparsification of Graph Neural Networks
- Authors: Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang,
Omer Khan, and Caiwen Ding
- Abstract summary: We apply two state-of-the-art model compression methods, train-and-prune and sparse training, to sparsify the weight layers of GNNs.
We evaluate and compare the efficiency of both methods in terms of accuracy, training sparsity, and training FLOPs on real-world graphs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As real-world graphs expand in size, larger GNN models with billions of
parameters are deployed. High parameter count in such models makes training and
inference on graphs expensive and challenging. To reduce the computational and
memory costs of GNNs, optimization methods such as pruning the redundant nodes
and edges in input graphs have been commonly adopted. However, model
compression, which directly targets the sparsification of model layers, has
been mostly limited to traditional Deep Neural Networks (DNNs) used for tasks
such as image classification and object detection. In this paper, we apply two
state-of-the-art model compression methods, (1) train-and-prune and (2) sparse
training, to sparsify the weight layers of GNNs. We evaluate
and compare the efficiency of both methods in terms of accuracy, training
sparsity, and training FLOPs on real-world graphs. Our experimental results
show that on the ia-email, wiki-talk, and stackoverflow datasets for link
prediction, sparse training achieves accuracy comparable to the train-and-prune
method at much lower training FLOPs. On the brain dataset for node
classification, sparse training uses fewer FLOPs (less than 1/7 of the FLOPs of
the train-and-prune method) and preserves much better accuracy under extreme
model sparsity.
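
The abstract contrasts train-and-prune (train a dense model, then zero out the smallest-magnitude weights) with sparse training (enforce a fixed weight-level sparsity throughout training). The sketch below illustrates both ideas on a toy GCN-style layer; it is a minimal illustration under assumed layer sizes, a random initial mask, and a dense placeholder adjacency, not the authors' implementation, datasets, or sparsity schedule.

```python
# Minimal sketch (not the authors' code): contrasting "train and prune"
# (magnitude pruning after dense training) with "sparse training" (a fixed
# sparse mask enforced at every update) on a toy GCN-style weight layer.
# Layer sizes, data, and the random mask below are illustrative assumptions.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """Single graph convolution H' = A_hat @ H @ W, with a dense adjacency for simplicity."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, a_hat, h):
        return a_hat @ h @ self.weight


def magnitude_prune_mask(weight, sparsity):
    """Train-and-prune step: mask out the smallest-magnitude weights after training."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()


def sparse_training_step(layer, mask, optimizer, a_hat, h, y, loss_fn):
    """Sparse-training step: weights and gradients are masked at every update."""
    layer.weight.data *= mask                  # keep pruned entries at zero
    loss = loss_fn(layer(a_hat, h), y)
    optimizer.zero_grad()
    loss.backward()
    layer.weight.grad *= mask                  # update only the surviving weights
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    n, in_dim, out_dim, sparsity = 8, 16, 4, 0.8   # toy sizes, illustrative only
    a_hat = torch.eye(n)                           # placeholder normalized adjacency
    h, y = torch.randn(n, in_dim), torch.randn(n, out_dim)

    # Sparse training: a fixed random mask at the target sparsity.
    layer = GCNLayer(in_dim, out_dim)
    mask = (torch.rand_like(layer.weight) > sparsity).float()
    opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
    for _ in range(200):
        sparse_training_step(layer, mask, opt, a_hat, h, y, nn.MSELoss())

    # Train and prune: train a separate dense layer, then prune by magnitude.
    dense = GCNLayer(in_dim, out_dim)
    opt_d = torch.optim.Adam(dense.parameters(), lr=1e-2)
    for _ in range(200):
        loss = nn.MSELoss()(dense(a_hat, h), y)
        opt_d.zero_grad()
        loss.backward()
        opt_d.step()
    dense.weight.data *= magnitude_prune_mask(dense.weight.detach(), sparsity)

    print("sparse-training nonzeros:", int(mask.sum().item()))
    print("train-and-prune nonzeros:", int((dense.weight != 0).sum().item()))
```

The key difference the paper's FLOP comparison rests on is visible here: sparse training never performs dense updates, since the mask restricts forward weights and gradients from the first step, while train-and-prune pays the full dense training cost before any sparsity is introduced.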