Towards Sparsification of Graph Neural Networks
- URL: http://arxiv.org/abs/2209.04766v1
- Date: Sun, 11 Sep 2022 01:39:29 GMT
- Title: Towards Sparsification of Graph Neural Networks
- Authors: Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang,
Omer Khan, and Caiwen Ding
- Abstract summary: We use two state-of-the-art model compression methods, (1) train and prune and (2) sparse training, for the sparsification of weight layers in GNNs.
We evaluate and compare the efficiency of both methods in terms of accuracy, training sparsity, and training FLOPs on real-world graphs.
- Score: 9.568566305616656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As real-world graphs expand in size, larger GNN models with billions of
parameters are deployed. High parameter count in such models makes training and
inference on graphs expensive and challenging. To reduce the computational and
memory costs of GNNs, optimization methods such as pruning the redundant nodes
and edges in input graphs have been commonly adopted. However, model
compression, which directly targets the sparsification of model layers, has
been mostly limited to traditional Deep Neural Networks (DNNs) used for tasks
such as image classification and object detection. In this paper, we utilize
two state-of-the-art model compression methods, (1) train and prune and (2)
sparse training, for the sparsification of weight layers in GNNs. We evaluate
and compare the efficiency of both methods in terms of accuracy, training
sparsity, and training FLOPs on real-world graphs. Our experimental results
show that on the ia-email, wiki-talk, and stackoverflow datasets for link
prediction, sparse training with much lower training FLOPs achieves accuracy
comparable to the train and prune method. On the brain dataset for
node classification, sparse training uses fewer FLOPs (less than 1/7 of the
FLOPs of the train and prune method) and preserves much better accuracy
under extreme model sparsity.
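- Illustrative example (not from the paper): the contrast between the two methods is easiest to see on a single weight layer. The PyTorch snippet below is a minimal sketch under stated assumptions, not the authors' implementation; the toy GCN layer, the random graph, the 90% sparsity target, and the fixed random mask are all illustrative. Train and prune learns a dense weight matrix and zeroes the smallest-magnitude entries once after training, whereas sparse training fixes a sparse mask up front so every training step updates only the surviving weights, which is what lowers training FLOPs.

```python
# Minimal sketch (not the authors' code): contrasting "train and prune"
# with "sparse training" on the weight matrix of a single GCN-style layer.
# The layer definition, 90% sparsity target, and random graph are assumptions.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W (no bias, for brevity)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, a_hat, h):
        return a_hat @ h @ self.weight

def magnitude_mask(weight, sparsity):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(weight.numel() * (1.0 - sparsity))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

# Toy data: a random "graph" with 100 nodes, 16 input and 8 output features.
n, d_in, d_out, sparsity = 100, 16, 8, 0.9
a_hat = torch.rand(n, n); a_hat = (a_hat + a_hat.T) / 2   # symmetric adjacency
h = torch.randn(n, d_in)
y = torch.randn(n, d_out)

# (1) Train and prune: train the dense layer, then prune once by magnitude.
dense = GCNLayer(d_in, d_out)
opt = torch.optim.Adam(dense.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((dense(a_hat, h) - y) ** 2).mean()
    loss.backward()
    opt.step()
with torch.no_grad():
    dense.weight *= magnitude_mask(dense.weight, sparsity)  # one-shot prune

# (2) Sparse training: fix a sparse mask up front and train only the
# surviving weights, so every step touches far fewer parameters.
sparse = GCNLayer(d_in, d_out)
mask = (torch.rand_like(sparse.weight) > sparsity).float()  # fixed random mask
opt = torch.optim.Adam(sparse.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    with torch.no_grad():
        sparse.weight *= mask                # enforce sparsity before the step
    loss = ((sparse(a_hat, h) - y) ** 2).mean()
    loss.backward()
    sparse.weight.grad *= mask               # block updates to pruned entries
    opt.step()
```

Note that practical sparse-training methods usually also prune and regrow connections during training rather than keeping the initial mask fixed; the static mask above is kept only for brevity.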
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Unlearning Graph Classifiers with Limited Data Resources [39.29148804411811]
Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs).
Our main contribution is the first known nonlinear approximate graph unlearning method based on GSTs.
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution is extensive simulation results which show that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up.
arXiv Detail & Related papers (2022-11-06T20:46:50Z) - Rethinking Efficiency and Redundancy in Training Large-scale Graphs [26.982614602436655]
We argue that redundancy exists in large-scale graphs and will degrade the training efficiency.
Despite recent advances in sampling-based training methods, sampling-based GNNs generally overlook the redundancy issue.
We propose DropReef to detect and drop the redundancy in large-scale graphs once and for all.
arXiv Detail & Related papers (2022-09-02T03:25:32Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph
Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)