RGP: Neural Network Pruning through Its Regular Graph Structure
- URL: http://arxiv.org/abs/2110.15192v1
- Date: Thu, 28 Oct 2021 15:08:32 GMT
- Title: RGP: Neural Network Pruning through Its Regular Graph Structure
- Authors: Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan
- Abstract summary: We study the graph structure of the neural network and propose regular graph based pruning (RGP) to perform one-shot neural network pruning.
Experiments show that the average shortest path length of the graph is negatively correlated with the classification accuracy of the corresponding neural network.
- Score: 6.0686251332936365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lightweight model design has become an important direction in the
application of deep learning technology, and pruning is an effective means of
achieving a large reduction in model parameters and FLOPs. Existing neural
network pruning methods mostly start from the importance of parameters and
design parameter evaluation metrics to prune parameters iteratively. These
methods do not consider the model's topology; they may be effective but not
efficient, and they require a completely different pruning procedure for each
dataset. In this paper, we study the graph structure of the neural network and
propose regular graph based pruning (RGP) to perform one-shot neural network
pruning. We generate a regular graph, set the node degree of the graph to
match the target pruning ratio, and reduce the average shortest path length of
the graph by swapping edges to obtain the optimal edge distribution. Finally,
the obtained graph is mapped onto a neural network structure to realize
pruning.
Experiments show that the average shortest path length of the graph is
negatively correlated with the classification accuracy of the corresponding
neural network, and the proposed RGP shows a strong precision retention
capability with extremely high parameter reduction (more than 90%) and FLOPs
reduction (more than 90%).
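As a rough illustration of the procedure described in the abstract, the following is a minimal sketch (not the authors' code): it builds a random regular graph whose node degree stands in for the keep ratio, then accepts degree-preserving edge swaps only when they reduce the average shortest path length. The function name and the networkx-based approach are illustrative assumptions.

```python
# Minimal sketch of the RGP recipe described above (illustrative, not the authors' code):
# 1) build a random regular graph whose node degree encodes the keep ratio,
# 2) greedily accept degree-preserving double edge swaps that shrink the
#    average shortest path length (ASPL),
# 3) the resulting graph would then be mapped onto the network's connections.
import random
import networkx as nx

def rgp_like_graph(num_nodes=64, degree=6, swap_steps=2000, seed=0):
    rng = random.Random(seed)
    # assumes the sampled regular graph is connected (true in practice at these sizes)
    G = nx.random_regular_graph(degree, num_nodes, seed=seed)
    best = nx.average_shortest_path_length(G)
    for _ in range(swap_steps):
        H = G.copy()
        try:
            # propose one degree-preserving swap: (a-b, c-d) -> (a-d, c-b)
            nx.double_edge_swap(H, nswap=1, max_tries=100,
                                seed=rng.randint(0, 2**31 - 1))
        except nx.NetworkXException:
            continue
        if not nx.is_connected(H):
            continue
        aspl = nx.average_shortest_path_length(H)
        if aspl < best:  # keep the swap only if it lowers the ASPL
            G, best = H, aspl
    return G, best

graph, aspl = rgp_like_graph()
print(f"final average shortest path length: {aspl:.3f}")
```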
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
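To make the "weighted sum of linearly transformed features" concrete, the sketch below approximates a node's aggregated representation from a small retained subset of nodes; the module name and shapes are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of approximating an aggregated node representation with a learned
# weighted sum of linearly transformed features from a few retained nodes,
# so fewer nodes are touched at inference time (illustrative only).
import torch
import torch.nn as nn

class SparseSumApprox(nn.Module):
    def __init__(self, in_dim, out_dim, num_selected):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.weights = nn.Parameter(torch.ones(num_selected) / num_selected)

    def forward(self, selected_feats):
        # selected_feats: (num_selected, in_dim) features of the retained nodes
        return (self.weights.unsqueeze(1) * self.lin(selected_feats)).sum(dim=0)

approx = SparseSumApprox(16, 32, num_selected=5)
print(approx(torch.randn(5, 16)).shape)  # torch.Size([32])
```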
arXiv Detail & Related papers (2024-10-25T17:52:16Z) - Spatiotemporal Forecasting Meets Efficiency: Causal Graph Process Neural Networks [5.703629317205571]
Causal Graph Processes (CGPs) offer an alternative, using graph filters instead of relational field layers to reduce parameters and minimize memory consumption.
This paper introduces a non-linear model combining CGPs and GNNs for spatiotemporal forecasting. CGProNet employs higher-order graph filters, optimizing the model with fewer parameters, reducing memory usage, and improving runtime efficiency.
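For context, a higher-order graph filter generally applies a polynomial in a graph shift operator to the node signal; the sketch below is a generic illustration of that idea, not CGProNet's implementation.

```python
# Generic K-th order graph filter y = sum_k h_k * S^k x (illustrative only);
# S is a graph shift operator (e.g. normalized adjacency), taps are the filter coefficients.
import numpy as np

def graph_filter(S, x, taps):
    y = np.zeros_like(x)
    Skx = x.copy()        # S^0 x
    for h_k in taps:
        y += h_k * Skx
        Skx = S @ Skx     # advance to S^{k+1} x
    return y

# toy usage: 4-node cycle graph, 3-tap filter
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
S = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
x = np.array([1.0, 0.0, 0.0, 0.0])
print(graph_filter(S, x, taps=np.array([0.5, 0.3, 0.2])))
```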
arXiv Detail & Related papers (2024-05-29T08:37:48Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN)
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
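As a rough point of reference, the sketch below uses a plain (non-kernelized) Gumbel-Softmax to draw differentiable soft neighbor weights over all node pairs; NodeFormer's actual kernelized operator avoids this quadratic cost and differs in detail.

```python
# Plain Gumbel-Softmax over all-pair edge logits: each node draws a differentiable
# soft distribution over every other node as its "neighbors". This only illustrates
# the Gumbel-Softmax relaxation, not NodeFormer's kernelized operator.
import torch
import torch.nn.functional as F

def soft_all_pair_weights(node_feats, tau=0.5):
    logits = node_feats @ node_feats.t()            # pairwise similarity logits, (N, N)
    return F.gumbel_softmax(logits, tau=tau, hard=False, dim=-1)

x = torch.randn(8, 16)                              # 8 nodes, 16-dim features
weights = soft_all_pair_weights(x)
messages = weights @ x                              # weighted all-pair message passing
print(weights.shape, messages.shape)
```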
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Relation Embedding based Graph Neural Networks for Handling
Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
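To illustrate the "one parameter per relation" idea in general terms, the sketch below scales each edge type's aggregation by a single learnable scalar; the names, shapes, and dense-adjacency setup are assumptions rather than the released RE-GNN code.

```python
# Sketch of one learnable importance scalar per relation (plus one for self-loops)
# scaling that relation's aggregated messages; illustrative only.
import torch
import torch.nn as nn

class RelationScaledConv(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.rel_weight = nn.Parameter(torch.ones(num_relations + 1))

    def forward(self, x, adjs):
        # adjs[r] is the (N, N) normalized adjacency of relation r
        out = self.rel_weight[-1] * x                 # self-loop term
        for r, A in enumerate(adjs):
            out = out + self.rel_weight[r] * (A @ x)  # relation-weighted aggregation
        return self.lin(out)

layer = RelationScaledConv(16, 32, num_relations=3)
x = torch.randn(10, 16)
adjs = [torch.rand(10, 10) for _ in range(3)]
print(layer(x, adjs).shape)                           # torch.Size([10, 32])
```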
arXiv Detail & Related papers (2022-09-23T05:24:18Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without causing a noticeable degradation in classification accuracy.
arXiv Detail & Related papers (2021-06-09T15:46:17Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
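As a toy illustration of penalizing the number of edges kept in a sparsified graph, the sketch below learns a free keep-logit per edge and adds the expected edge count to the training loss; PTDNet itself parameterizes edge scores with networks over node features, so this is only an assumption-laden simplification.

```python
# Toy edge-sparsification penalty: learn a keep probability per candidate edge and
# penalize the expected number of edges kept (illustrative, not PTDNet's code).
import torch
import torch.nn as nn

class EdgeMask(nn.Module):
    def __init__(self, num_edges):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_edges))  # one logit per candidate edge

    def forward(self):
        return torch.sigmoid(self.logits)                    # soft keep probabilities

    def sparsity_penalty(self):
        return torch.sigmoid(self.logits).sum()              # expected number of kept edges

mask = EdgeMask(num_edges=200)
task_loss = torch.tensor(0.0)                # placeholder for the downstream GNN loss
loss = task_loss + 1e-2 * mask.sparsity_penalty()
loss.backward()
```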
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - RicciNets: Curvature-guided Pruning of High-performance Neural Networks
Using Ricci Flow [0.0]
We use the definition of Ricci curvature to remove edges of low importance before mapping the computational graph to a neural network.
We show a reduction of almost 35% in the number of floating-point operations (FLOPs) per pass, with no degradation in performance.
arXiv Detail & Related papers (2020-07-08T15:56:02Z)
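As a loose analogue of curvature-guided edge removal, the sketch below scores edges with the simple combinatorial Forman-Ricci curvature F(u, v) = 4 - deg(u) - deg(v) and drops the lowest-scoring fraction; RicciNets itself derives edge importance via Ricci flow, so both the curvature variant and the pruning direction here are assumptions.

```python
# Illustrative stand-in for curvature-guided pruning (not RicciNets): rank edges by
# the combinatorial Forman-Ricci curvature and remove the lowest-scoring fraction.
import networkx as nx

def prune_by_forman_curvature(G, drop_frac=0.3):
    deg = dict(G.degree())
    scored = sorted(G.edges(), key=lambda e: 4 - deg[e[0]] - deg[e[1]])
    n_drop = int(drop_frac * G.number_of_edges())
    H = G.copy()
    H.remove_edges_from(scored[:n_drop])   # drop the lowest-curvature edges
    return H

G = nx.erdos_renyi_graph(50, 0.2, seed=1)
H = prune_by_forman_curvature(G)
print(G.number_of_edges(), "->", H.number_of_edges())
```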
This list is automatically generated from the titles and abstracts of the papers on this site.