GISExplainer: On Explainability of Graph Neural Networks via Game-theoretic Interaction Subgraphs
- URL: http://arxiv.org/abs/2409.15698v2
- Date: Mon, 30 Dec 2024 13:28:24 GMT
- Title: GISExplainer: On Explainability of Graph Neural Networks via Game-theoretic Interaction Subgraphs
- Authors: Xingping Xian, Jianlu Liu, Chao Wang, Tao Wu, Shaojie Qiao, Xiaochuan Tang, Qun Liu,
- Abstract summary: GISExplainer is a novel game-theoretic interaction-based explanation method. It uncovers what the underlying GNNs have learned for node classification by discovering human-interpretable causal explanatory subgraphs. Extensive experiments demonstrate that GISExplainer achieves better performance than state-of-the-art approaches.
- Score: 21.012180171806456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability is crucial for applying black-box Graph Neural Networks (GNNs) in critical fields such as healthcare, finance, and cybersecurity. Various feature attribution methods, especially perturbation-based methods, have been proposed to indicate how much each node/edge contributes to the model predictions. However, these methods fail to generate connected explanatory subgraphs that consider the causal interactions between edges within different coalition scales, which results in unfaithful explanations. In our study, we propose GISExplainer, a novel game-theoretic interaction-based explanation method that uncovers what the underlying GNNs have learned for node classification by discovering human-interpretable causal explanatory subgraphs. First, GISExplainer defines a causal attribution mechanism that considers the game-theoretic interactions of multi-granularity coalitions in a candidate explanatory subgraph to quantify the causal effect of an edge on the prediction. Second, GISExplainer assumes that coalitions with negative effects on the predictions are also significant for model interpretation, and that the contribution of the computation graph stems from the combined influence of both positive and negative interactions within the coalitions. Then, GISExplainer casts the explanation task as a sequential decision process, in which salient edges are successively selected and connected to the previously selected subgraph according to their causal effects, forming an explanatory subgraph and ultimately yielding better explanations. Additionally, an efficiency optimization scheme based on coalition sampling is proposed for the causal attribution mechanism. Extensive experiments demonstrate that GISExplainer outperforms state-of-the-art approaches w.r.t. two quantitative metrics: Fidelity and Sparsity.
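A minimal sketch of the sequential selection process the abstract describes, assuming a toy `predict` function that scores an edge subset. All names here (`causal_effect`, `explain`, the dot-separated helpers) are illustrative, not GISExplainer's actual API: the causal effect of a candidate edge is approximated by sampling coalitions of already-selected edges, and the edge with the largest absolute effect is greedily attached to the growing connected subgraph.

```python
# Sketch only: greedy, coalition-sampled edge attribution for a node's
# computation graph, in the spirit of the abstract above.
import random
from typing import Callable, FrozenSet, List, Set, Tuple

Edge = Tuple[int, int]


def causal_effect(
    predict: Callable[[FrozenSet[Edge]], float],  # model confidence on an edge subset
    edge: Edge,
    selected: Set[Edge],
    num_samples: int = 16,
) -> float:
    """Approximate the interaction of `edge` with sampled multi-granularity coalitions."""
    effects = []
    for _ in range(num_samples):
        # Sample a coalition of some size from the already-selected edges.
        k = random.randint(0, len(selected))
        coalition = frozenset(random.sample(sorted(selected), k))
        # Marginal (causal) contribution of the edge w.r.t. this coalition;
        # negative effects are kept, since the paper treats them as informative too.
        effects.append(predict(coalition | {edge}) - predict(coalition))
    return sum(effects) / len(effects)


def explain(
    predict: Callable[[FrozenSet[Edge]], float],
    candidate_edges: List[Edge],
    budget: int,
) -> Set[Edge]:
    """Greedily grow a connected explanatory subgraph, one salient edge at a time."""
    selected: Set[Edge] = set()
    for _ in range(budget):
        remaining = [e for e in candidate_edges if e not in selected]
        # Keep the subgraph connected: after the first pick, only consider
        # edges touching an already-selected node.
        nodes = {v for e in selected for v in e}
        frontier = [e for e in remaining if not selected or e[0] in nodes or e[1] in nodes]
        if not frontier:
            break
        best = max(frontier, key=lambda e: abs(causal_effect(predict, e, selected)))
        selected.add(best)
    return selected


if __name__ == "__main__":
    # Toy usage: the "prediction" rises with the number of path edges kept.
    target = {(0, 1), (1, 2), (2, 3)}

    def predict(edges: FrozenSet[Edge]) -> float:
        return len(edges & target) / len(target)

    print(explain(predict, [(0, 1), (1, 2), (2, 3), (3, 4)], budget=3))
```

Ranking by absolute effect mirrors the paper's assumption that coalitions with negative effects on the prediction also carry interpretive weight, and the coalition sampling stands in for the proposed efficiency optimization.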
Related papers
- Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks [54.62268052283014]
Oversmoothing is a common issue in graph neural networks (GNNs).
Three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs.
Negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing (a toy illustration follows this entry).
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
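A toy demonstration, not taken from the paper above, of message passing over a signed graph: a positive edge pulls neighboring node features together (the usual smoothing), while a negative edge pushes them apart, which is the repulsion said to mitigate oversmoothing.

```python
# Toy signed message passing: edges in A_signed are +1 (attract) or -1 (repel).
import numpy as np

def signed_propagate(X, A_signed, alpha=0.5):
    """One propagation step with a signed adjacency matrix in {-1, 0, +1}."""
    deg = np.abs(A_signed).sum(axis=1, keepdims=True).clip(min=1)
    return (1 - alpha) * X + alpha * (A_signed @ X) / deg

X = np.array([[1.0], [0.0]])          # two nodes, scalar features
A_pos = np.array([[0, 1], [1, 0]])    # positive edge between the two nodes
A_neg = -A_pos                        # the same edge, made negative
print(signed_propagate(X, A_pos))     # features move toward each other
print(signed_propagate(X, A_neg))     # features move away from each other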
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148]
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention mechanism aggregation component and a neighborhood aggregation component (a rough sketch follows this entry).
To assess the effectiveness of our proposed model, we have applied it to citation sentiment prediction, a novel task previously unexplored in the GNN field.
arXiv Detail & Related papers (2024-11-23T05:31:25Z)
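A rough sketch of what a Top-m attention aggregation step could look like; the function name, shapes, and dot-product scoring are assumptions, not TANGNN's actual design. Each node attends only to its m highest-scoring neighbors.

```python
# Sketch: aggregate messages from each node's m best-scoring neighbors only.
import numpy as np

def top_m_aggregate(h, adj, m=2):
    """h: (n, d) node features; adj: (n, n) 0/1 adjacency; returns (n, d)."""
    out = np.zeros_like(h)
    for i in range(h.shape[0]):
        neighbors = np.flatnonzero(adj[i])
        if neighbors.size == 0:
            continue
        scores = h[neighbors] @ h[i]              # dot-product attention logits
        top = neighbors[np.argsort(scores)[-m:]]  # keep the m best neighbors
        w = np.exp(h[top] @ h[i])
        w /= w.sum()                              # softmax over the kept scores
        out[i] = w @ h[top]                       # weighted neighbor aggregation
    return out

h = np.random.rand(5, 8)
adj = (np.random.rand(5, 5) < 0.5).astype(float)
np.fill_diagonal(adj, 0)
print(top_m_aggregate(h, adj, m=2).shape)         # (5, 8)
```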
- Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network [20.86967051637891]
Heterogeneous graph neural networks (HGNs) are prominent approaches to node classification tasks on heterogeneous graphs.
Existing explainability techniques are mainly proposed for GNNs on homogeneous graphs.
We develop xPath, a new framework that provides fine-grained explanations for black-box HGNs.
arXiv Detail & Related papers (2023-12-23T12:13:23Z)
- Semantic Interpretation and Validation of Graph Attention-based Explanations for GNN Models [9.260186030255081]
We propose a methodology for investigating the use of semantic attention to enhance the explainability of Graph Neural Network (GNN)-based models.
Our work extends existing attention-based graph explainability methods by analysing the divergence in the attention distributions in relation to semantically sorted feature sets.
We apply our methodology to a lidar point cloud estimation model, successfully identifying key semantic classes that contribute to enhanced performance.
arXiv Detail & Related papers (2023-08-08T12:34:32Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment [38.66324833510402]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed; most of them formalize this task as searching for the minimal subgraph that can preserve the original predictions.
Several different subgraphs can result in the same or similar outputs as the original graph, making such explanations inconsistent.
Applying them to explain weakly-performing GNNs would further amplify these issues.
arXiv Detail & Related papers (2023-01-07T06:33:35Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- On the Ability of Graph Neural Networks to Model Interactions Between Vertices [14.909298522361306]
Graph neural networks (GNNs) are widely used for modeling complex interactions between entities represented as vertices of a graph.
Despite recent efforts to theoretically analyze the expressive power of GNNs, a formal characterization of their ability to model interactions is lacking.
arXiv Detail & Related papers (2022-11-29T18:58:07Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
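A minimal sketch of the two graph-construction methods the entry above compares, assuming Euclidean point coordinates (as in molecular or point-cloud data): a KNN graph links each point to its k closest neighbors, while an FC graph links every pair of distinct nodes.

```python
# Sketch: build KNN and fully-connected adjacency matrices from coordinates.
import numpy as np

def knn_graph(points, k=3):
    """points: (n, d) coordinates; returns a 0/1 (n, n) adjacency matrix."""
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)             # exclude self-loops
    adj = np.zeros_like(dists)
    nearest = np.argsort(dists, axis=1)[:, :k]  # indices of the k closest points
    rows = np.repeat(np.arange(len(points)), k)
    adj[rows, nearest.ravel()] = 1.0
    return adj

def fc_graph(n):
    """Fully-connected adjacency: every pair of distinct nodes is linked."""
    return np.ones((n, n)) - np.eye(n)

points = np.random.rand(10, 3)                  # e.g., atom coordinates
print(knn_graph(points, k=3).sum(axis=1))       # each row has exactly 3 edges
print(fc_graph(10).sum())                       # 10*9 = 90 directed edges
```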
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
arXiv Detail & Related papers (2022-03-29T03:08:33Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Neural Belief Propagation for Scene Graph Generation [31.9682610869767]
We propose a novel neural belief propagation method to generate the resulting scene graph.
It employs a structural Bethe approximation rather than the mean field approximation to infer the associated marginals.
It achieves state-of-the-art performance on various popular scene graph generation benchmarks.
arXiv Detail & Related papers (2021-12-10T18:30:27Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)