Influence Functions for Edge Edits in Non-Convex Graph Neural Networks
- URL: http://arxiv.org/abs/2506.04694v1
- Date: Thu, 05 Jun 2025 07:15:46 GMT
- Title: Influence Functions for Edge Edits in Non-Convex Graph Neural Networks
- Authors: Jaeseung Heo, Kyeongheung Yun, Seokwon Yoon, MoonJeong Park, Jungseul Ok, Dongwoo Kim
- Abstract summary: We propose a proximal Bregman response function specifically tailored for graph neural networks (GNNs). Our method explicitly accounts for message propagation effects and extends influence prediction to edge deletions and insertions in a principled way. We show that the influence function is versatile in applications such as graph rewiring and adversarial attacks.
- Score: 7.49509518177852
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how individual edges influence the behavior of graph neural networks (GNNs) is essential for improving their interpretability and robustness. Graph influence functions have emerged as promising tools to efficiently estimate the effects of edge deletions without retraining. However, existing influence prediction methods rely on strict convexity assumptions, exclusively consider the influence of edge deletions while disregarding edge insertions, and fail to capture changes in message propagation caused by these modifications. In this work, we propose a proximal Bregman response function specifically tailored for GNNs, relaxing the convexity requirement and enabling accurate influence prediction for standard neural network architectures. Furthermore, our method explicitly accounts for message propagation effects and extends influence prediction to both edge deletions and insertions in a principled way. Experiments with real-world datasets demonstrate accurate influence predictions for different characteristics of GNNs. We further demonstrate that the influence function is versatile in applications such as graph rewiring and adversarial attacks.
Related papers
- Uncertainty-Aware Graph Neural Networks: A Multi-Hop Evidence Fusion Approach [55.43914153271912]
Graph neural networks (GNNs) excel in graph representation learning by integrating graph structure and node features. Existing GNNs fail to account for the uncertainty of class probabilities that vary with the depth of the model, leading to unreliable and risky predictions in real-world scenarios. We propose a novel Evidence Fusing Graph Neural Network (EFGNN for short) to achieve trustworthy prediction, enhance node classification accuracy, and make explicit the risk of wrong predictions.
arXiv Detail & Related papers (2025-06-16T03:59:38Z) - Statistical Test for Saliency Maps of Graph Neural Networks via Selective Inference [13.628959580589665]
We propose a statistical testing framework to rigorously evaluate the significance of saliency maps. Our main contribution lies in addressing the inflation of the Type I error rate caused by double-dipping of data. Our method provides statistically valid $p$-values while controlling the Type I error rate.
arXiv Detail & Related papers (2025-05-22T16:50:55Z) - A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs [54.62268052283014]
We present a unified theoretical perspective based on the framework of signed graphs. We show that many existing strategies implicitly introduce negative edges that alter message-passing to resist oversmoothing. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity.
arXiv Detail & Related papers (2025-02-17T03:25:36Z) - GISExplainer: On Explainability of Graph Neural Networks via Game-theoretic Interaction Subgraphs [21.012180171806456]
GISExplainer is a novel game-theoretic interaction based explanation method. It uncovers what the underlying GNNs have learned for node classification by discovering human-interpretable causal explanatory subgraphs. Extensive experiments demonstrate that GISExplainer achieves better performance than state-of-the-art approaches.
arXiv Detail & Related papers (2024-09-24T03:24:31Z) - Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
It can be categorized into two veins based on their effects on the performance of graph neural networks (GNNs).
We propose a unified formulation and establish a clear boundary between two categories of edge perturbation methods.
arXiv Detail & Related papers (2024-03-10T15:50:04Z) - Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Efficient Graph Neural Network Inference at Large Scale [54.89457550773165]
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications.
Existing scalable GNNs leverage linear propagation to preprocess the features and accelerate the training and inference procedure.
We propose a novel adaptive propagation order approach that generates the personalized propagation order for each node based on its topological information.
arXiv Detail & Related papers (2022-11-01T14:38:18Z) - Characterizing the Influence of Graph Elements [24.241010101383505]
The influence function of graph convolution networks (GCNs) can shed light on the effects of removing training nodes/edges from an input graph.
We show that the influence function of an SGC model could be used to estimate the impact of removing training nodes/edges on the test performance of the SGC without re-training the model.
arXiv Detail & Related papers (2022-10-14T01:04:28Z) - Maximizing Influence with Graph Neural Networks [23.896176168370996]
GLIE is a graph neural network that learns to estimate the influence spread of the independent cascade.
GLIE relies on a theoretical upper bound that is tightened through supervised training.
We develop a provably submodular influence spread based on GLIE's representations to rank nodes while building the seed set adaptively.
arXiv Detail & Related papers (2021-08-10T12:08:15Z) - Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important to get high-quality influence estimates.
arXiv Detail & Related papers (2020-06-25T18:25:59Z)
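The fragility result above turns on the inverse-Hessian-vector product at the heart of every influence estimate. The sketch below shows, on a synthetic ill-conditioned matrix standing in for a deep network's Hessian, why a damping term $\lambda I$ (the "Hessian regularization" the paper highlights) tames the estimate along near-flat directions. The spectrum and vectors here are illustrative assumptions, not a real network Hessian.

```python
import numpy as np

# Why damping matters: influence estimates need v = H^{-1} g, which
# explodes along near-flat directions of an ill-conditioned Hessian.
# Solving (H + lambda*I) v = g instead bounds the solution.
# H below is a synthetic stand-in with eigenvalues spanning 8 orders.

rng = np.random.default_rng(1)
d = 20
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthonormal basis
eigvals = np.logspace(-8, 0, d)                # near-singular PSD spectrum
H = Q @ np.diag(eigvals) @ Q.T                 # ill-conditioned "Hessian"
g = rng.normal(size=d)                         # a stand-in gradient vector

norms = {}
for lam in [0.0, 1e-4, 1e-2]:
    v = np.linalg.solve(H + lam * np.eye(d), g)  # damped inverse-HVP
    norms[lam] = np.linalg.norm(v)
    print(f"lambda={lam:g}  ||(H + lambda I)^-1 g|| = {norms[lam]:.3e}")
```

Each eigendirection contributes $(q_k^\top g)^2/(e_k+\lambda)^2$ to the squared norm, so the undamped solve is dominated by the $10^{-8}$ eigenvalue while even modest damping shrinks it by orders of magnitude; this is the mechanism behind the observation that regularization is needed for high-quality influence estimates in deep networks.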
This list is automatically generated from the titles and abstracts of the papers in this site.