GISExplainer: On Explainability of Graph Neural Networks via Game-theoretic Interaction Subgraphs
- URL: http://arxiv.org/abs/2409.15698v2
- Date: Mon, 30 Dec 2024 13:28:24 GMT
- Title: GISExplainer: On Explainability of Graph Neural Networks via Game-theoretic Interaction Subgraphs
- Authors: Xingping Xian, Jianlu Liu, Chao Wang, Tao Wu, Shaojie Qiao, Xiaochuan Tang, Qun Liu
- Abstract summary: GISExplainer is a novel game-theoretic interaction-based explanation method.
It uncovers what the underlying GNNs have learned for node classification by discovering human-interpretable causal explanatory subgraphs.
Extensive experiments demonstrate that GISExplainer achieves better performance than state-of-the-art approaches.
- Abstract: Explainability is crucial for the application of black-box Graph Neural Networks (GNNs) in critical fields such as healthcare, finance, and cybersecurity. Various feature attribution methods, especially perturbation-based methods, have been proposed to indicate how much each node/edge contributes to the model predictions. However, these methods fail to generate connected explanatory subgraphs that account for the causal interactions between edges at different coalition scales, which results in unfaithful explanations. In our study, we propose GISExplainer, a novel game-theoretic interaction-based explanation method that uncovers what the underlying GNNs have learned for node classification by discovering human-interpretable causal explanatory subgraphs. First, GISExplainer defines a causal attribution mechanism that considers the game-theoretic interactions of multi-granularity coalitions in a candidate explanatory subgraph to quantify the causal effect of an edge on the prediction. Second, GISExplainer assumes that coalitions with negative effects on the predictions are also significant for model interpretation, and that the contribution of the computation graph stems from the combined influence of both positive and negative interactions within the coalitions. GISExplainer then treats the explanation task as a sequential decision process, in which salient edges are successively selected and connected to the previously selected subgraph based on their causal effects, forming an explanatory subgraph and ultimately striving for better explanations. Additionally, an efficiency optimization scheme is proposed for the causal attribution mechanism through coalition sampling. Extensive experiments demonstrate that GISExplainer achieves better performance than state-of-the-art approaches w.r.t. two quantitative metrics: Fidelity and Sparsity.
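To make the mechanism in the abstract concrete: a standard sampled (Monte-Carlo) marginal-contribution estimator of an edge's causal effect, which the coalition-sampling scheme described above resembles, can be written as

$$\phi(e \mid S) \approx \frac{1}{M} \sum_{m=1}^{M} \Big[ f\big(C_m \cup S \cup \{e\}\big) - f\big(C_m \cup S\big) \Big],$$

where $S$ is the already-selected subgraph, $C_m$ are coalitions sampled from the remaining candidate edges, and $f$ scores the model's prediction on an edge subset. This is the generic sampled-Shapley form, not necessarily the paper's exact attribution. The sketch below follows the same structure under stated assumptions: `model_score`, `budget`, and `n_samples` are hypothetical names, and ranking edges by the absolute value of the effect (so that negative interactions also count) is an assumption consistent with the abstract, not a confirmed detail of the paper.

```python
# Illustrative sketch only: hypothetical names throughout; the paper's
# exact attribution formula, sampling scheme, and selection rule may differ.
import random
from typing import Callable, FrozenSet, List, Tuple

Edge = Tuple[int, int]
ScoreFn = Callable[[FrozenSet[Edge]], float]  # e.g. a masked-GNN class score

def sampled_causal_effect(
    edge: Edge,
    selected: FrozenSet[Edge],
    candidates: List[Edge],
    model_score: ScoreFn,
    n_samples: int = 32,
) -> float:
    """Estimate the causal effect of `edge` as its average marginal
    contribution over coalitions sampled at random granularities from
    the remaining candidate edges (Monte-Carlo sampling in place of
    enumerating all multi-granularity coalitions)."""
    pool = [e for e in candidates if e != edge and e not in selected]
    total = 0.0
    for _ in range(n_samples):
        k = random.randint(0, len(pool))  # coalition size = granularity
        coalition = frozenset(random.sample(pool, k)) | selected
        # Marginal contribution: score with the edge minus score without.
        # Negative values are kept; the abstract treats negative
        # interactions as significant for the explanation too.
        total += model_score(coalition | {edge}) - model_score(coalition)
    return total / n_samples

def build_explanatory_subgraph(
    candidates: List[Edge],
    model_score: ScoreFn,
    budget: int,
) -> List[Edge]:
    """Greedy sequential selection: at each step pick the feasible edge
    with the largest absolute causal effect, keeping the result connected."""
    chosen: List[Edge] = []
    selected: FrozenSet[Edge] = frozenset()
    for _ in range(budget):
        nodes = {u for e in chosen for u in e}
        feasible = [
            e for e in candidates
            if e not in selected and (not chosen or e[0] in nodes or e[1] in nodes)
        ]
        if not feasible:
            break
        best = max(
            feasible,
            key=lambda e: abs(
                sampled_causal_effect(e, selected, candidates, model_score)
            ),
        )
        chosen.append(best)
        selected |= {best}
    return chosen
```

The greedy loop mirrors the "sequential decision process" in the abstract: each step scores only edges that attach to the already-selected subgraph, so the output is a connected explanatory subgraph by construction.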
Related papers
- Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks [54.62268052283014]
Oversmoothing is a common issue in graph neural networks (GNNs).
Three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs.
Negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing.
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
- Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network [20.86967051637891]
Heterogeneous graph neural networks (HGNs) are prominent approaches to node classification tasks on heterogeneous graphs.
Existing explainability techniques are mainly proposed for GNNs on homogeneous graphs.
We develop xPath, a new framework that provides fine-grained explanations for black-box HGNs.
arXiv Detail & Related papers (2023-12-23T12:13:23Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment [38.66324833510402]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, most of which formalize this task as searching for the minimal subgraph that preserves the original predictions.
However, several distinct subgraphs can yield the same or similar outputs as the original graph, making such explanations inconsistent.
Applying these methods to explain weakly-performing GNNs would further amplify these issues.
arXiv Detail & Related papers (2023-01-07T06:33:35Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
arXiv Detail & Related papers (2022-03-29T03:08:33Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE), which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted performance on many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)