Pruning and Quantization Impact on Graph Neural Networks
- URL: http://arxiv.org/abs/2510.22058v1
- Date: Fri, 24 Oct 2025 22:44:25 GMT
- Title: Pruning and Quantization Impact on Graph Neural Networks
- Authors: Khatoon Khedri, Reza Rawassizadeh, Qifu Wen, Mehdi Hosseinzadeh
- Abstract summary: Graph neural networks (GNNs) operate with high accuracy on learning from graph-structured data. Two common neural network compression techniques are pruning and quantization. We empirically examine the effects of three pruning methods and three quantization methods on different GNN models.
- Score: 3.3262657112288196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are known to learn from graph-structured data with high accuracy, but they suffer from high computational and resource costs. Neural network compression methods reduce model size while maintaining reasonable accuracy. Two common compression techniques are pruning and quantization. In this research, we empirically examine the effects of three pruning methods and three quantization methods on different GNN models across graph classification, node classification, and link prediction tasks. We conducted all experiments on three graph datasets: Cora, Proteins, and BBBP. Our findings demonstrate that unstructured fine-grained and global pruning can significantly reduce the model's size (50%) while maintaining or even improving precision after fine-tuning the pruned model. The evaluation of different quantization methods on GNNs shows diverse impacts on accuracy, inference time, and model size across different datasets.
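The listing contains no code, so the following is a minimal sketch of the two compression steps the abstract names: global unstructured (magnitude) pruning followed by post-training quantization. The `TinyGCN` model, its layer sizes, and the dense adjacency are illustrative assumptions, not the authors' setup; PyTorch's built-in `prune.global_unstructured` and `quantize_dynamic` utilities are one standard way to realize these techniques, not necessarily the exact implementations evaluated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SimpleGCNLayer(nn.Module):
    """GCN-style layer used as a stand-in: X' = A_hat @ (X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_hat):
        # a_hat: normalized adjacency matrix (dense here for brevity)
        return a_hat @ self.lin(x)

class TinyGCN(nn.Module):
    """Two-layer GCN; the dimensions are illustrative, not the paper's."""
    def __init__(self, in_dim=16, hidden=32, classes=7):
        super().__init__()
        self.conv1 = SimpleGCNLayer(in_dim, hidden)
        self.conv2 = SimpleGCNLayer(hidden, classes)

    def forward(self, x, a_hat):
        return self.conv2(torch.relu(self.conv1(x, a_hat)), a_hat)

model = TinyGCN()

# Global unstructured pruning: rank weights by L1 magnitude across ALL
# listed layers jointly and zero out the smallest 50% (mirroring the
# 50% size reduction the abstract reports).
to_prune = [(model.conv1.lin, "weight"), (model.conv2.lin, "weight")]
prune.global_unstructured(
    to_prune, pruning_method=prune.L1Unstructured, amount=0.5
)

# Fine-tuning of the masked model would happen here; prune.remove then
# folds the binary mask into the weight tensor, making pruning permanent.
for module, name in to_prune:
    prune.remove(module, name)

# Post-training dynamic quantization: every nn.Linear is swapped for an
# int8 counterpart; activations are quantized on the fly at inference.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(10, 16)        # 10 nodes with 16 features
a_hat = torch.eye(10)          # placeholder normalized adjacency
print(qmodel(x, a_hat).shape)  # torch.Size([10, 7])
```

Global pruning ranks weights across all layers jointly before zeroing the smallest fraction, which is what separates it from layer-wise (local) magnitude pruning; dynamic quantization stores weights in int8 and quantizes activations at run time, affecting both model size and inference latency.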
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Ensemble Learning for Graph Neural Networks [28.3650473174488]
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data.
This paper investigates the application of ensemble learning techniques to improve the performance and robustness of GNNs.
arXiv Detail & Related papers (2023-10-22T03:55:13Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - What Makes Graph Neural Networks Miscalibrated? [48.00374886504513]
We conduct a systematic study on the calibration qualities of graph neural networks (GNNs).
We identify five factors which influence the calibration of GNNs: general under-confident tendency, diversity of nodewise predictive distributions, distance to training nodes, relative confidence level, and neighborhood similarity.
We design a novel calibration method named Graph Attention Temperature Scaling (GATS), which is tailored for calibrating graph neural networks.
arXiv Detail & Related papers (2022-10-12T16:41:42Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike methods based on the Lottery Ticket Hypothesis (LTH), the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - On Calibration of Graph Neural Networks for Node Classification [29.738179864433445]
Graph neural networks learn entity and edge embeddings for tasks such as node classification and link prediction.
These models achieve good performance with respect to accuracy, but the confidence scores associated with the predictions might not be calibrated.
We propose a topology-aware calibration method that takes the neighboring nodes into account and yields improved calibration.
arXiv Detail & Related papers (2022-06-03T13:48:10Z) - Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
Graph neural networks (GNNs) integrate the relational structure of graph data with representation learning capability.
Oversmoothing makes the final representations of nodes indiscriminative, thus deteriorating the node classification and link prediction performance.
We propose the Topology-guided Graph Contrastive Layer (TGCL), the first de-oversmoothing method to maintain all three of the metrics we identify.
arXiv Detail & Related papers (2021-10-26T15:56:16Z) - An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose a novel Robust Graph Convolutional Network for potentially erroneous single-view or multi-view data.
By incorporating extra autoencoder-based layers into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z) - Multivariate Time Series Forecasting with Transfer Entropy Graph [5.179058210068871]
We propose a novel end-to-end deep learning model, termed graph neural network with Neural Granger Causality (CauGNN).
Each variable is regarded as a graph node, and each edge represents the causal relationship between variables.
Three real-world benchmark datasets are used to evaluate the proposed CauGNN.
arXiv Detail & Related papers (2020-05-03T20:51:00Z)