GraphGI: A GNN Explanation Method using Game Interaction
- URL: http://arxiv.org/abs/2409.15698v1
- Date: Tue, 24 Sep 2024 03:24:31 GMT
- Title: GraphGI: A GNN Explanation Method using Game Interaction
- Authors: Xingping Xian, Jianlu Liu, Tao Wu, Lin Yuan, Chao Wang, Baiyun Chen
- Abstract summary: Graph Neural Networks (GNNs) have garnered significant attention and have been extensively utilized across various domains.
Current graph explanation techniques focus on identifying key nodes or edges, attributing predictions to the critical data features that drive them.
We propose a novel explanation method, GraphGI, which identifies the coalition with the highest interaction strength and presents it as an explanatory subgraph.
- Score: 5.149896909638598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have garnered significant attention and have been extensively utilized across various domains. However, similar to other deep learning models, GNNs are often viewed as black-box models, making it challenging to interpret their prediction mechanisms. Current graph explanation techniques focus on identifying key nodes or edges, attributing predictions to the critical data features that drive them. Nevertheless, these features do not independently influence the model's outcomes; rather, they interact with one another to collectively affect predictions. In this work, we propose a novel explanation method, GraphGI, which identifies the coalition with the highest interaction strength and presents it as an explanatory subgraph. Given a trained model and an input graph, our method explains predictions by gradually incorporating significant edges into the selected subgraph. We utilize game-theoretic interaction values to assess the interaction strength after each edge addition, ensuring that every newly added edge confers the maximum interaction strength on the explanatory subgraph. To enhance computational efficiency, we adopt effective approximation techniques for calculating Shapley values and game-theoretic interaction values. Empirical evaluations demonstrate that our method achieves superior fidelity and sparsity while keeping the resulting explanations comprehensible.
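For intuition, the following is a minimal sketch of the greedy, interaction-guided selection described in the abstract, assuming edges act as the game's players and that a function model_value(kept_edges) scores the model's prediction on the graph restricted to those edges. The Monte Carlo estimator, the fallback for the empty coalition, and every name here are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only: a hedged reading of the greedy procedure in
# the abstract, NOT the authors' implementation. Edges act as "players";
# model_value(kept_edges) scores the graph restricted to those edges.
import random

def interaction_strength(model_value, all_edges, coalition, candidate,
                         n_samples=50, seed=0):
    """Monte Carlo estimate of the game-theoretic interaction between the
    coalition S and a candidate edge e, averaging the mixed difference
    v(T∪S∪{e}) - v(T∪S) - v(T∪{e}) + v(T) over sampled backgrounds T."""
    rng = random.Random(seed)
    rest = [e for e in all_edges if e not in coalition and e != candidate]
    total = 0.0
    for _ in range(n_samples):
        # Sample a random background coalition T from the remaining edges.
        t = frozenset(e for e in rest if rng.random() < 0.5)
        if coalition:
            total += (model_value(t | coalition | {candidate})
                      - model_value(t | coalition)
                      - model_value(t | {candidate})
                      + model_value(t))
        else:
            # Degenerate first step (empty coalition): fall back to the
            # marginal contribution, i.e. a Shapley-value-style estimate.
            total += model_value(t | {candidate}) - model_value(t)
    return total / n_samples

def greedy_explanatory_subgraph(model_value, all_edges, budget):
    """Grow the explanatory subgraph greedily: at each step add the edge
    whose estimated interaction with the current coalition is largest."""
    coalition = frozenset()
    for _ in range(budget):
        candidates = [e for e in all_edges if e not in coalition]
        if not candidates:
            break
        best = max(candidates, key=lambda e: interaction_strength(
            model_value, all_edges, coalition, e))
        coalition |= {best}
    return coalition

if __name__ == "__main__":
    # Toy value function: the prediction is 1 only when edges (0, 1)
    # and (1, 2) are kept together, so those two edges interact strongly.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    def v(kept):
        return 1.0 if {(0, 1), (1, 2)} <= set(kept) else 0.0
    print(greedy_explanatory_subgraph(v, edges, budget=2))
```

Enumerating all background coalitions T is exponential in the edge count, which is why a sampling estimate of the interaction values is the natural choice in a sketch like this; the approximation techniques the paper adopts for Shapley and interaction values address the same bottleneck.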
Related papers
- Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks [54.62268052283014]
Oversmoothing is a common issue in graph neural networks (GNNs).
Three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs.
Negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing.
arXiv Detail & Related papers (2025-02-17T03:25:36Z) - Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network [20.86967051637891]
Heterogeneous graph neural networks (HGNs) are prominent approaches to node classification tasks on heterogeneous graphs.
Existing explainability techniques are mainly proposed for GNNs on homogeneous graphs.
We develop xPath, a new framework that provides fine-grained explanations for black-box HGNs.
arXiv Detail & Related papers (2023-12-23T12:13:23Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment [38.66324833510402]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed; most formalize this task as searching for the minimal subgraph that preserves the original predictions.
However, several distinct subgraphs can yield the same or similar outputs as the original graph, making such explanations ambiguous.
Applying them to explain weakly-performing GNNs would further amplify these issues.
arXiv Detail & Related papers (2023-01-07T06:33:35Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
arXiv Detail & Related papers (2022-03-29T03:08:33Z) - Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z) - Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.