Shapley-Value-Based Graph Sparsification for GNN Inference
- URL: http://arxiv.org/abs/2507.20460v1
- Date: Mon, 28 Jul 2025 01:30:09 GMT
- Title: Shapley-Value-Based Graph Sparsification for GNN Inference
- Authors: Selahattin Akkas, Ariful Azad
- Abstract summary: Graph sparsification is a technique for improving inference efficiency in Graph Neural Networks. Shapley-value-based methods assign both positive and negative contributions to node predictions. Our approach shows that Shapley-value-based graph sparsification maintains predictive performance while significantly reducing graph complexity.
- Score: 1.5998912722142724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph sparsification is a key technique for improving inference efficiency in Graph Neural Networks by removing edges with minimal impact on predictions. GNN explainability methods generate local importance scores, which can be aggregated into global scores for graph sparsification. However, many explainability methods produce only non-negative scores, limiting their applicability for sparsification. In contrast, Shapley-value-based methods assign both positive and negative contributions to node predictions, offering a theoretically robust and fair allocation of importance by evaluating many subsets of the graph. Unlike gradient-based or perturbation-based explainers, Shapley values enable better pruning strategies that preserve influential edges while removing misleading or adversarial connections. Our approach shows that Shapley-value-based graph sparsification maintains predictive performance while significantly reducing graph complexity, enhancing both interpretability and efficiency in GNN inference.
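To make the approach concrete, below is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: Monte Carlo permutation sampling to estimate per-edge Shapley values, followed by pruning of the lowest-valued (including negative) edges. The `model_score` function is a toy stand-in, not the authors' implementation.

```python
import random

import numpy as np

# Toy stand-in for a GNN: each edge has a fixed (possibly negative) effect on
# the prediction score. Replace `model_score` with a real GNN call in practice.
rng = np.random.default_rng(0)
edges = list(range(10))
edge_effect = rng.normal(0.0, 1.0, size=len(edges))

def model_score(edge_subset):
    return float(np.tanh(sum(edge_effect[e] for e in edge_subset)))

def shapley_edge_values(edges, score_fn, n_perms=200, seed=0):
    """Monte Carlo permutation estimate of per-edge Shapley values."""
    prng = random.Random(seed)
    values = dict.fromkeys(edges, 0.0)
    for _ in range(n_perms):
        perm = list(edges)
        prng.shuffle(perm)
        included, prev = [], score_fn([])
        for e in perm:
            included.append(e)
            cur = score_fn(included)
            values[e] += cur - prev  # marginal contribution of e in this ordering
            prev = cur
    return {e: v / n_perms for e, v in values.items()}

values = shapley_edge_values(edges, model_score)
# Keep the highest-valued edges; negative-valued (misleading) edges go first.
kept = sorted(edges, key=lambda e: values[e], reverse=True)[: len(edges) // 2]
print(sorted(kept))
```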
Related papers
- Shapley-Guided Utility Learning for Effective Graph Inference Data Valuation [6.542796128290513]
We propose Shapley-Guided Utility Learning (SGUL), a novel framework for graph inference data valuation. SGUL combines transferable data-specific and model-specific features to approximate test accuracy without relying on ground-truth labels. We show that SGUL consistently outperforms existing baselines in both inductive and transductive settings.
arXiv Detail & Related papers (2025-03-23T20:35:03Z)
- A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs [54.62268052283014]
We present a unified theoretical perspective based on the framework of signed graphs. We show that many existing strategies implicitly introduce negative edges that alter message passing to resist oversmoothing. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity.
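A minimal sketch of the signed-edge idea described above, assuming the simplest possible rule (same label, or cosine similarity above a threshold, gives a positive sign); the names and the propagation step are illustrative, not SBP's exact formulation.

```python
import numpy as np

def sign_edges(edges, labels=None, feats=None, tau=0.5):
    # Assign +1/-1 to each edge from label agreement or feature similarity.
    signs = {}
    for u, v in edges:
        if labels is not None:
            signs[(u, v)] = 1.0 if labels[u] == labels[v] else -1.0
        else:
            cos = feats[u] @ feats[v] / (
                np.linalg.norm(feats[u]) * np.linalg.norm(feats[v]) + 1e-12
            )
            signs[(u, v)] = 1.0 if cos >= tau else -1.0
    return signs

def signed_propagate(x, signs, n, steps=2):
    # Message passing where negative edges push neighbors apart,
    # counteracting oversmoothing instead of averaging everything together.
    A = np.zeros((n, n))
    for (u, v), s in signs.items():
        A[u, v] = A[v, u] = s
    deg = np.abs(A).sum(1, keepdims=True) + 1e-12
    for _ in range(steps):
        x = x + (A @ x) / deg
    return x
```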
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
- Improving the interpretability of GNN predictions through conformal-based graph sparsification [9.550589670316523]
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks.
We propose a GNN training approach that finds the most predictive subgraph by removing edges and/or nodes.
We rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions.
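As a hedged sketch of how a conformal-based reward could look, the snippet below uses standard split-conformal calibration and scores a sparsified graph by the average prediction-set size; the paper's actual reward design may differ.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Split-conformal calibration: nonconformity = 1 - prob of the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    level = min(np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores), 1.0)
    return np.quantile(scores, level)

def sparsification_reward(test_probs, qhat):
    # Classes with prob >= 1 - qhat form the prediction set; a smaller average
    # set means the sparsified graph still supports confident predictions.
    set_sizes = (test_probs >= 1.0 - qhat).sum(axis=1)
    return -float(set_sizes.mean())
```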
arXiv Detail & Related papers (2024-04-18T17:34:47Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
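A rough sketch of a node-correspondence distance in this spirit, using an optimal assignment over pairwise feature distances; the function names and cost choice are assumptions, not MatchExplainer's exact procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correspondence_distance(x_a, x_b):
    # Cost = Euclidean distance between node features; the optimal matching
    # yields a correspondence-based distance between the two graphs.
    cost = np.linalg.norm(x_a[:, None, :] - x_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), list(zip(rows, cols))

def best_counterpart(x_target, candidates):
    # Pick the counterpart instance whose matched nodes are closest to the target.
    dists = [correspondence_distance(x_target, x_c)[0] for x_c in candidates]
    return int(np.argmin(dists))
```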
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
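One plausible reading of the low-rank idea is to propagate features through a truncated SVD of the adjacency, so each step costs O(nk) rather than O(n^2); the sketch below is illustrative and not necessarily the paper's exact model.

```python
import numpy as np

def low_rank_propagate(A, X, k=16, steps=2):
    # Rank-k factors of the adjacency; A_k = U_k diag(S_k) V_k^T.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, Sk, Vtk = U[:, :k], S[:k], Vt[:k]
    for _ in range(steps):
        X = Uk @ (Sk[:, None] * (Vtk @ X))  # A_k @ X without forming A_k
    return X
```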
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
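A minimal sketch of distributional node embeddings, assuming Gaussian posteriors and the reparameterization trick; the encoder here is hypothetical and stands in for whatever backbone the paper uses.

```python
import torch
import torch.nn as nn

class GaussianNodeEncoder(nn.Module):
    """Embeds each node as a Gaussian distribution rather than a point vector."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, out_dim)
        self.logvar = nn.Linear(in_dim, out_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: a differentiable sample per node.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar  # downstream losses can use the full distribution
```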
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Implicit vs Unfolded Graph Neural Networks [29.803948965931212]
We show that implicit and unfolded GNNs can achieve strong node classification accuracy across disparate regimes. While IGNN is substantially more memory-efficient, UGNN models support unique, integrated graph attention mechanisms and propagation rules.
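The contrast can be sketched in a few lines: an implicit GNN iterates a single update to a fixed point, while an unfolded GNN stacks that same update as explicit layers. This toy version is illustrative only (real IGNNs backpropagate via implicit differentiation, which is what makes them memory-efficient).

```python
import torch

def update(Z, A, X, W):
    # Shared propagation rule for both designs.
    return torch.tanh(A @ Z @ W + X)

def implicit_gnn(A, X, W, iters=50, tol=1e-5):
    # Iterate to a fixed point Z* = update(Z*); no per-layer activations stored.
    Z = torch.zeros_like(X)
    for _ in range(iters):
        Z_new = update(Z, A, X, W)
        if (Z_new - Z).abs().max() < tol:
            break
        Z = Z_new
    return Z

def unfolded_gnn(A, X, W, layers=4):
    # Each layer is an explicit, stored, differentiable step of the same update.
    Z = torch.zeros_like(X)
    for _ in range(layers):
        Z = update(Z, A, X, W)
    return Z
```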
arXiv Detail & Related papers (2021-11-12T07:49:16Z)
- Structure-Aware Hard Negative Mining for Heterogeneous Graph Contrastive Learning [21.702342154458623]
This work investigates Contrastive Learning (CL) on Graph Neural Networks (GNNs).
We first generate multiple semantic views according to metapaths and network schemas.
We then push node embeddings corresponding to different semantic views close to each other (positives) and pull other embeddings apart (negatives).
Considering the complex graph structure and the smoothing nature of GNNs, we propose a structure-aware hard negative mining scheme.
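A hedged sketch of hard negative mining in a contrastive loss: negatives are drawn from the most similar non-positive embeddings, which is where a structure-aware mining scheme would plug in. All names are hypothetical.

```python
import torch
import torch.nn.functional as F

def hard_negative_infonce(z_anchor, z_pos, z_all, pos_mask, k_hard=16, tau=0.2):
    # z_anchor: (N, d) anchors; z_pos: (N, d) positives; z_all: (M, d) candidates;
    # pos_mask: boolean (N, M), True where a candidate is a positive for the anchor.
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_all = F.normalize(z_all, dim=-1)
    sim = z_anchor @ z_all.t() / tau                # anchor-vs-candidate similarities
    sim = sim.masked_fill(pos_mask, float("-inf"))  # exclude positives from negatives
    k = min(k_hard, sim.size(1))
    hard_sim, _ = sim.topk(k, dim=-1)               # hardest (most similar) negatives
    pos_sim = (z_anchor * F.normalize(z_pos, dim=-1)).sum(-1, keepdim=True) / tau
    logits = torch.cat([pos_sim, hard_sim], dim=-1)
    target = torch.zeros(len(z_anchor), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, target)
```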
arXiv Detail & Related papers (2021-08-31T14:44:49Z)
- Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable degradation in classification accuracy.
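The sketch below conveys the spirit of coarsening-based reduction with a single heavy-edge-matching pass that roughly halves the node count; the off-the-shelf coarseners the paper refers to are more sophisticated.

```python
import numpy as np

def coarsen_once(A, X):
    # One pass of heavy-edge matching: greedily merge the heaviest-weight
    # unmatched pairs, then contract the graph and average node features.
    n = A.shape[0]
    rows, cols = np.triu_indices(n, 1)
    order = np.argsort(-A[rows, cols])  # heaviest edges first
    parent = -np.ones(n, dtype=int)
    clusters = 0
    for idx in order:
        u, v = rows[idx], cols[idx]
        if A[u, v] > 0 and parent[u] < 0 and parent[v] < 0:
            parent[u] = parent[v] = clusters
            clusters += 1
    for u in range(n):  # leftover singletons become their own clusters
        if parent[u] < 0:
            parent[u] = clusters
            clusters += 1
    P = np.zeros((clusters, n))
    P[parent, np.arange(n)] = 1.0
    A_c = P @ A @ P.T               # contracted adjacency
    np.fill_diagonal(A_c, 0.0)
    X_c = (P @ X) / P.sum(1, keepdims=True)  # mean features per cluster
    return A_c, X_c, parent
```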
arXiv Detail & Related papers (2021-06-09T15:46:17Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly, and the performance gain becomes larger on noisier datasets.
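A minimal sketch of the core mechanism, assuming a simple sigmoid edge scorer: a parameterized network produces per-edge keep probabilities, and the (soft) edge count of the sparsified graph is penalized in the loss. This is illustrative, not PTDNet's exact parameterization.

```python
import torch
import torch.nn as nn

class EdgeMasker(nn.Module):
    """Scores each edge from its endpoint features and emits a soft keep-mask."""

    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, edge_index):
        src, dst = edge_index  # edge_index: (2, E) long tensor
        m = torch.sigmoid(self.scorer(torch.cat([x[src], x[dst]], dim=-1))).squeeze(-1)
        sparsity_penalty = m.sum()  # penalize the soft edge count
        return m, sparsity_penalty

# Training would minimize: task_loss + lambda * sparsity_penalty,
# so task-irrelevant edges are driven toward a mask value of zero.
```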
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.