One-Shot Multi-Rate Pruning of Graph Convolutional Networks
- URL: http://arxiv.org/abs/2312.17615v1
- Date: Fri, 29 Dec 2023 14:20:00 GMT
- Title: One-Shot Multi-Rate Pruning of Graph Convolutional Networks
- Authors: Hichem Sahbi
- Abstract summary: We devise a novel lightweight Graph Convolutional Network (GCN) design dubbed Multi-Rate Magnitude Pruning (MRMP).
Our method is variational and proceeds by aligning the weight distribution of the learned networks with an a priori distribution.
On the other hand, MRMP achieves a joint training of multiple GCNs, on top of shared weights, in order to extrapolate accurate networks at any targeted pruning rate without retraining their weights.
- Score: 5.656581242851759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we devise a novel lightweight Graph Convolutional Network
(GCN) design dubbed Multi-Rate Magnitude Pruning (MRMP) that jointly trains
network topology and weights. Our method is variational and proceeds by
aligning the weight distribution of the learned networks with an a priori
distribution. On the one hand, this allows implementing any fixed pruning rate
and also enhances the generalization performance of the designed lightweight
GCNs. On the other hand, MRMP achieves a joint training of multiple GCNs, on
top of shared weights, in order to extrapolate accurate networks at any
targeted pruning rate without retraining their weights. Extensive experiments
conducted on the challenging task of skeleton-based recognition show a
substantial gain for our lightweight GCNs, particularly in very high pruning
regimes.
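To make the one-shot, multi-rate idea concrete, below is a minimal sketch (an illustration under our own assumptions, not the authors' released code) of extracting magnitude-based masks at several pruning rates from a single set of shared weights; the class and parameter names are hypothetical.

```python
# Hypothetical sketch of one-shot multi-rate magnitude pruning: one set of
# shared weights is trained once, and a binary mask for any target pruning
# rate is read off by thresholding weight magnitudes at the corresponding
# quantile. Names (MultiRatePruner, rates) are illustrative.
import torch

class MultiRatePruner:
    def __init__(self, rates=(0.5, 0.9, 0.98)):
        self.rates = rates  # target fractions of weights to remove

    def masks(self, weight: torch.Tensor) -> dict:
        """Return one binary mask per pruning rate, all derived from the
        same shared weights (no retraining per rate)."""
        flat = weight.abs().flatten()
        out = {}
        for r in self.rates:
            # Magnitude threshold at the r-quantile: weights below it are pruned.
            thresh = torch.quantile(flat, r)
            out[r] = (weight.abs() >= thresh).float()
        return out

torch.manual_seed(0)
w = torch.randn(64, 64)            # shared GCN layer weights
pruner = MultiRatePruner()
for rate, mask in pruner.masks(w).items():
    print(f"rate={rate:.2f}  kept fraction={mask.mean().item():.3f}")
    w_pruned = w * mask            # subnetwork extracted at this rate
```

Because every mask is a quantile threshold over the same shared weights, a subnetwork at any new rate can be obtained without retraining, which is the extrapolation property the abstract describes.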
Related papers
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Budget-Aware Graph Convolutional Network Design using Probabilistic
Magnitude Pruning [12.18340575383456]
We devise a novel lightweight Graph Convolutional Network (GCN) design dubbed Probabilistic Magnitude Pruning (PMP).
Our method is variational and proceeds by aligning the weight distribution of the learned networks with an a priori distribution.
Experiments conducted on the challenging task of skeleton-based recognition show a substantial gain for our lightweight GCNs.
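The budget-aware flavor can be sketched as a differentiable penalty that aligns the expected fraction of kept weights with a fixed budget; the sigmoid relaxation, temperature, and function names below are our assumptions, not the paper's exact formulation.

```python
# Toy sketch (not the paper's code) of a budget-aware pruning loss: each
# weight gets a soft keep-probability from its magnitude, and a penalty
# aligns the expected kept fraction with a fixed budget.
import torch

def budget_loss(weight: torch.Tensor, keep_budget: float,
                temperature: float = 0.05) -> torch.Tensor:
    """Penalize deviation of the expected kept fraction from the budget."""
    # Soft keep-probability: large-magnitude weights -> probability near 1.
    keep_prob = torch.sigmoid((weight.abs() - weight.abs().median()) / temperature)
    expected_kept = keep_prob.mean()
    return (expected_kept - keep_budget) ** 2

w = torch.randn(64, 64, requires_grad=True)
loss = budget_loss(w, keep_budget=0.1)   # target: keep 10% of weights
loss.backward()                          # differentiable w.r.t. the weights
print(f"budget loss: {loss.item():.4f}")
```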
arXiv Detail & Related papers (2023-05-30T18:12:13Z) - Training Lightweight Graph Convolutional Networks with Phase-field
Models [12.18340575383456]
We design lightweight graph convolutional networks (GCNs) using a particular class of regularizers dubbed phase-field models (PFMs).
PFMs exhibit a bi-phase behavior using a particular ultra-local term that allows training both the topology and the weight parameters of GCNs as part of a single "end-to-end" optimization problem.
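A classic bi-phase (double-well) term of this kind can be sketched as follows; this is a generic phase-field illustration, not necessarily the paper's exact ultra-local term.

```python
# Minimal sketch of a double-well penalty in the spirit of phase-field
# regularizers: it is minimized only when each mask entry sits at 0
# (pruned) or 1 (kept), so masks trained jointly with the weights are
# driven toward binary values.
import torch

def double_well(mask: torch.Tensor) -> torch.Tensor:
    """Double-well potential m^2 (1 - m)^2, zero only at m in {0, 1}."""
    return (mask ** 2 * (1.0 - mask) ** 2).mean()

m = torch.full((64, 64), 0.5, requires_grad=True)  # maximally ambiguous masks
print(f"penalty at 0.5: {double_well(m).item():.4f}")              # 0.0625 (the peak)
print(f"penalty at 1.0: {double_well(torch.ones(4)).item():.4f}")  # 0.0
```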
arXiv Detail & Related papers (2022-12-19T12:49:03Z) - A Comprehensive Survey on Distributed Training of Graph Neural Networks [59.785830738482474]
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields.
To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training.
The volume of related research on distributed GNN training is exceptionally vast, accompanied by an extraordinarily rapid pace of publication.
arXiv Detail & Related papers (2022-11-10T06:22:12Z) - Lightweight Graph Convolutional Networks with Topologically Consistent
Magnitude Pruning [12.18340575383456]
Graph convolutional networks (GCNs) are currently mainstream for learning with irregular data.
In this paper, we devise a novel method for lightweight GCN design.
Our proposed approach parses and selects subnetworks with the highest magnitudes while guaranteeing their topological consistency.
arXiv Detail & Related papers (2022-03-25T12:34:11Z) - RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional
Network [102.27090022283208]
The Graph Convolutional Network (GCN) plays a pivotal role in many real-world applications.
GCN often exhibits performance disparity with respect to node degrees, resulting in worse predictive accuracy for low-degree nodes.
We formulate the problem of mitigating the degree-related performance disparity in GCN from the perspective of the Rawlsian difference principle.
arXiv Detail & Related papers (2022-02-28T05:07:57Z) - Distributed Optimization of Graph Convolutional Network using Subgraph
Variance [8.510726499008204]
We propose a Graph Augmentation based Distributed GCN framework (GAD).
GAD has two main components, GAD-Partition and GAD-r.
Compared to state-of-the-art methods, our framework reduces communication overhead by 50%, improves convergence speed by 2X, and yields a slight accuracy gain (0.45%) on the basis of minimal redundancy.
arXiv Detail & Related papers (2021-10-06T18:01:47Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
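One way such a block could look, purely as a hypothetical sketch of parallel spatial and temporal paths (the branch shapes and kernel sizes are our assumptions, not the paper's architecture):

```python
# Rough sketch of a building block that aggregates information from
# parallel spatial and temporal paths for skeleton sequences shaped
# (batch, channels, frames, joints).
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    def __init__(self, channels: int, num_joints: int):
        super().__init__()
        # Spatial path: mix joint features through a learnable adjacency.
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.spatial = nn.Conv2d(channels, channels // 2, kernel_size=1)
        # Temporal path: convolve along the frame axis only.
        self.temporal = nn.Conv2d(channels, channels // 2,
                                  kernel_size=(9, 1), padding=(4, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, joints)
        s = self.spatial(torch.einsum("bctj,jk->bctk", x, self.adj))
        t = self.temporal(x)
        return torch.cat([s, t], dim=1)  # back to `channels` channels

x = torch.randn(2, 64, 32, 25)           # 25 joints, 32 frames
block = SpatioTemporalBlock(64, 25)
print(block(x).shape)                     # torch.Size([2, 64, 32, 25])
```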
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Geometric Scattering Attention Networks [14.558882688159297]
We introduce a new attention-based architecture to produce adaptive task-driven node representations.
We show the resulting geometric scattering attention network (GSAN) outperforms previous networks in semi-supervised node classification.
arXiv Detail & Related papers (2020-10-28T14:36:40Z) - Improve Generalization and Robustness of Neural Networks via Weight
Scale Shifting Invariant Regularizations [52.493315075385325]
We show that a family of regularizers, including weight decay, is ineffective at penalizing the intrinsic norms of weights for networks with homogeneous activation functions.
We propose an improved regularizer that is invariant to weight scale shifting and thus effectively constrains the intrinsic norm of a neural network.
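The scale-shifting issue is easy to demonstrate numerically; the sum-of-log-norms penalty below is one invariant choice we use for illustration, not necessarily the paper's exact regularizer.

```python
# For ReLU (homogeneous) networks, rescaling consecutive layers by c and
# 1/c leaves the function unchanged, yet standard weight decay changes.
# A sum-of-log-norms penalty (one invariant alternative) does not.
import torch

torch.manual_seed(0)
w1, w2 = torch.randn(8, 8), torch.randn(8, 8)
c = 3.0
v1, v2 = c * w1, w2 / c  # function-preserving rescale for ReLU nets

def weight_decay(a, b):
    return (a ** 2).sum() + (b ** 2).sum()

def log_norm(a, b):
    return torch.log((a ** 2).sum()) + torch.log((b ** 2).sum())

print(weight_decay(w1, w2).item(), weight_decay(v1, v2).item())  # differ
print(log_norm(w1, w2).item(), log_norm(v1, v2).item())          # equal
```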
arXiv Detail & Related papers (2020-08-07T02:55:28Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
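Such predictors are commonly scored by the Spearman rank correlation between predicted and measured sub-network accuracies; a toy example with fabricated numbers for illustration only:

```python
# Rank-correlation evaluation of a performance predictor: closer to 1
# means the predictor orders candidate sub-networks more faithfully.
from scipy.stats import spearmanr

true_acc = [0.71, 0.68, 0.74, 0.66, 0.72]   # measured sub-network accuracies
pred_acc = [0.70, 0.72, 0.75, 0.64, 0.71]   # predicted accuracies
rho, _ = spearmanr(true_acc, pred_acc)
print(f"Spearman rank correlation: {rho:.3f}")  # 0.700 for these toy numbers
```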
arXiv Detail & Related papers (2020-04-17T19:12:39Z)