Statistical Test for Saliency Maps of Graph Neural Networks via Selective Inference
- URL: http://arxiv.org/abs/2505.16893v1
- Date: Thu, 22 May 2025 16:50:55 GMT
- Title: Statistical Test for Saliency Maps of Graph Neural Networks via Selective Inference
- Authors: Shuichi Nishino, Tomohiro Shiraishi, Teruyuki Katsuoka, Ichiro Takeuchi
- Abstract summary: We propose a statistical testing framework to rigorously evaluate the significance of saliency maps. Our main contribution lies in addressing the inflation of the Type I error rate caused by double-dipping of data. Our method provides statistically valid $p$-values while controlling the Type I error rate.
- Score: 13.628959580589665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have gained prominence for their ability to process graph-structured data across various domains. However, interpreting GNN decisions remains a significant challenge, leading to the adoption of saliency maps for identifying influential nodes and edges. Despite their utility, the reliability of GNN saliency maps has been questioned, particularly in terms of their robustness to noise. In this study, we propose a statistical testing framework to rigorously evaluate the significance of saliency maps. Our main contribution lies in addressing the inflation of the Type I error rate caused by double-dipping of data, leveraging the framework of Selective Inference. Our method provides statistically valid $p$-values while controlling the Type I error rate, ensuring that identified salient subgraphs contain meaningful information rather than random artifacts. To demonstrate the effectiveness of our method, we conduct experiments on both synthetic and real-world datasets, showing its effectiveness in assessing the reliability of GNN interpretations.
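The failure mode the paper targets, double-dipping, is easy to reproduce: when the same data are used both to select a salient node and to test it, a naive test rejects far too often under the null. Below is a minimal Python simulation of this inflation (illustrative only; it is not the authors' selective-inference procedure, and all names are made up).

```python
# Toy simulation: select the most "salient" node from pure noise, then
# naively z-test it on the same data. Nominal level is 0.05, but the
# observed Type I error is far higher because the test ignores selection.
import numpy as np

rng = np.random.default_rng(0)
alpha, n_nodes, n_trials = 0.05, 50, 10_000
z_crit = 1.96  # two-sided critical value of N(0, 1) at alpha = 0.05

rejections = 0
for _ in range(n_trials):
    scores = rng.standard_normal(n_nodes)  # saliency scores under the global null
    top = np.argmax(np.abs(scores))        # select the most salient node...
    if abs(scores[top]) > z_crit:          # ...then test it on the same data
        rejections += 1

print(f"naive Type I error: {rejections / n_trials:.3f} (nominal {alpha})")
# Prints roughly 0.92, not 0.05. Selective inference restores validity by
# computing the p-value conditional on the selection event.
```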
Related papers
- On the Stability of Graph Convolutional Neural Networks: A Probabilistic Perspective [24.98112303106984]
We study how perturbations in the graph topology affect GCNN outputs and propose a novel formulation for analyzing model stability. Unlike prior studies that focus only on worst-case perturbations, our distribution-aware formulation characterizes output perturbations across a broad range of input data.
arXiv Detail & Related papers (2025-06-01T23:17:19Z) - On the Relationship Between Robustness and Expressivity of Graph Neural Networks [7.161966906570077]
Graph Neural Networks (GNNs) are vulnerable to bit-flip attacks (BFAs). We introduce an analytical framework to study the influence of architectural features, graph properties, and their interaction. We derive theoretical bounds for the number of bit flips required to degrade GNN expressivity on a dataset.
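As background on why even a handful of bit flips is damaging, the toy snippet below (not from the paper; `flip_bit` is a hypothetical helper) flips a single exponent bit in a float32 weight, the kind of fault a BFA exploits:

```python
# Flip one bit of a float32 weight and observe the change in magnitude.
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip bit `bit` (0 = least significant) of x's IEEE-754 float32 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (out,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return out

w = 0.5
print(w, "->", flip_bit(w, 30))  # top exponent bit: 0.5 becomes ~1.7e38
```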
arXiv Detail & Related papers (2025-04-18T16:38:33Z) - BetaExplainer: A Probabilistic Method to Explain Graph Neural Networks [1.798554018133928]
Graph neural networks (GNNs) are powerful tools for conducting inference on graph data. Many interpretable GNN methods exist, but they cannot quantify uncertainty in edge weights. We propose BetaExplainer, which addresses these issues by using a sparsity-inducing prior to mask unimportant edges.
arXiv Detail & Related papers (2024-12-16T16:45:26Z) - Conditional Uncertainty Quantification for Tensorized Topological Neural Networks [19.560300212956747]
Graph Neural Networks (GNNs) have become the de facto standard for analyzing graph-structured data.
Recent studies have raised concerns about the statistical reliability of uncertainty estimates produced by GNNs.
This paper introduces a novel technique for quantifying uncertainty in non-exchangeable graph-structured data.
arXiv Detail & Related papers (2024-10-20T01:03:40Z) - DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - Conditional Shift-Robust Conformal Prediction for Graph Neural Network [0.0]
Graph Neural Networks (GNNs) have emerged as potent tools for predicting outcomes in graph-structured data. Despite their efficacy, GNNs have limited ability to provide robust uncertainty estimates. We propose Conditional Shift Robust (CondSR) conformal prediction for GNNs.
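For readers unfamiliar with the machinery, here is a minimal split-conformal sketch in Python; it is the generic exchangeability-based recipe, not the CondSR method itself, and the function name is illustrative.

```python
# Split conformal prediction: calibrate a score threshold on held-out data,
# then return, for each test point, the set of classes below the threshold.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # A class enters the prediction set when its score is within the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Usage with random softmax outputs for a 3-class problem.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
print(conformal_sets(cal_probs, cal_labels, rng.dirichlet(np.ones(3), size=2)))
```

Under exchangeability the returned sets cover the true label with probability at least 1 - alpha; the paper's contribution is keeping such guarantees useful under conditional shift.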
arXiv Detail & Related papers (2024-05-20T11:47:31Z) - Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z) - Uncertainty in Graph Neural Networks: A Survey [47.785948021510535]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs, stemming from diverse sources, can lead to unstable and erroneous predictions. This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
arXiv Detail & Related papers (2023-10-03T06:25:14Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
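The energy score at the core of such detectors is straightforward to compute from a model's logits; a minimal numpy sketch follows (per-node scores only; GNNSafe's propagation of energies over the graph is omitted, and the function name is ours).

```python
# Energy score E(x) = -T * logsumexp(logits / T); higher energy suggests
# an out-of-distribution input, lower energy an in-distribution one.
import numpy as np

def energy_score(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    z = logits / T
    m = z.max(axis=-1, keepdims=True)  # shift for a numerically stable logsumexp
    return -T * (m[..., 0] + np.log(np.exp(z - m).sum(axis=-1)))

# Per-node logits for a 3-class task: confident nodes get low energy.
logits = np.array([[5.0, 0.1, 0.2],
                   [0.3, 0.2, 0.1]])
print(energy_score(logits))  # the first node's energy is much lower
```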
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.