A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics
- URL: http://arxiv.org/abs/2207.12599v2
- Date: Tue, 23 May 2023 03:32:23 GMT
- Title: A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics
- Authors: Yiqiao Li and Jianlong Zhou and Sunny Verma and Fang Chen
- Abstract summary: We focus on explainable graph neural networks and categorize them by the type of explainability method they employ. We also describe the common performance metrics for GNN explanations and point out several future research directions.
- Score: 8.795591344648294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have demonstrated a significant boost in
prediction performance on graph data. At the same time, the predictions made by
these models are often hard to interpret. In that regard, many efforts have
been made to explain the prediction mechanisms of these models through approaches such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks to interpret GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive
review of explainability techniques developed for GNNs. We focus on explainable
graph neural networks and categorize them by the type of explainability method they employ. We further summarize the common performance metrics for GNN explanations and point out several future research directions.
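As a concrete companion to the metrics discussion above, the following is a minimal Python sketch (using numpy) of two metrics that recur in this literature: Fidelity+ (the prediction drop when the explanation is removed) and Sparsity (the fraction of the graph excluded from the explanation). The model callable and the node-mask representation are hypothetical placeholders, not an interface defined by the survey.

    import numpy as np

    def fidelity_plus(model, adj, feats, node_mask, label):
        # model: hypothetical callable (adj, feats) -> class probabilities.
        # node_mask: boolean array, True for nodes in the explanation.
        p_full = model(adj, feats)[label]
        keep = ~node_mask
        # Remove the explanation: zero its features and incident edges.
        adj_reduced = adj * np.outer(keep, keep)
        feats_reduced = feats * keep[:, None]
        p_reduced = model(adj_reduced, feats_reduced)[label]
        return p_full - p_reduced  # larger drop = more necessary explanation

    def sparsity(node_mask):
        # Fraction of nodes left out of the explanation (higher = sparser).
        return 1.0 - node_mask.mean()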
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075] (arXiv, 2024-06-07)
Graph Neural Networks (GNNs) combine information from adjacent nodes by successive applications of graph convolutions.
We study the generalization gaps of GNNs on both node-level and graph-level tasks.
We show that the generalization gaps decrease with the number of nodes in the training graphs.
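To make "successive applications of graph convolutions" concrete, here is a minimal sketch of a symmetric-normalized (GCN-style) convolution layer; the shapes and random weights are purely illustrative.

    import numpy as np

    def gcn_layer(adj, feats, weight):
        # One layer: normalized neighbor aggregation + linear transform + ReLU.
        a_hat = adj + np.eye(adj.shape[0])            # add self-loops
        d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
        return np.maximum(a_norm @ feats @ weight, 0.0)

    # Each stacked layer widens every node's receptive field by one hop.
    rng = np.random.default_rng(0)
    adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    h = gcn_layer(adj, rng.normal(size=(3, 4)), rng.normal(size=(4, 4)))
    h = gcn_layer(adj, h, rng.normal(size=(4, 2)))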
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679] (arXiv, 2024-03-11)
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
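One standard way to quantify such predictive uncertainty (a generic recipe, not something specific to this survey) is to run several stochastic forward passes, e.g. with dropout active, and decompose the entropy of the averaged prediction. The sketch below assumes a hypothetical array of per-sample class probabilities.

    import numpy as np

    def uncertainty_decomposition(sampled_probs, eps=1e-12):
        # sampled_probs: (num_samples, num_classes) from stochastic passes.
        mean_p = sampled_probs.mean(axis=0)
        total = -(mean_p * np.log(mean_p + eps)).sum()  # predictive entropy
        aleatoric = -(sampled_probs * np.log(sampled_probs + eps)).sum(axis=1).mean()
        epistemic = total - aleatoric                   # mutual information
        return total, aleatoric, epistemic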
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565] (arXiv, 2024-02-07)
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
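For context, the Graph Information Bottleneck principle referenced here seeks a compressed explanatory subgraph G_S that remains predictive of the label Y. A common form of the objective in the general GIB literature (not necessarily this paper's exact formulation) is

    \max_{G_S \subseteq G} \; I(G_S; Y) - \beta \, I(G_S; G)

where I(\cdot;\cdot) denotes mutual information and \beta trades predictiveness against compression.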
- A Survey on Explainability of Graph Neural Networks [4.612101932762187] (arXiv, 2023-06-02)
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104] (arXiv, 2023-05-22)
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
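The decomposition idea can be illustrated on a single linear aggregation step: because the step is linear in the node features, its output splits exactly into the contribution of a chosen node subset and that of the remaining nodes. This is only a sketch of the principle, assuming one linear layer; DEGREE's handling of nonlinear components is more involved.

    import numpy as np

    def decompose_linear_step(a_norm, feats, weight, target_mask):
        # Split h = A @ X @ W into target vs. background contributions.
        h_target = a_norm @ (feats * target_mask[:, None]) @ weight
        h_rest = a_norm @ (feats * (~target_mask)[:, None]) @ weight
        # By linearity, h_target + h_rest equals a_norm @ feats @ weight.
        return h_target, h_rest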
- GANExplainer: GAN-based Graph Neural Networks Explainer [5.641321839562139] (arXiv, 2022-12-30)
In many applications, it is critical to explain why a graph neural network (GNN) makes particular predictions before its outputs can be trusted.
We propose GANExplainer, an explainer built on the Generative Adversarial Network (GAN) architecture.
GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.
- Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107] (arXiv, 2022-09-16)
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to increase the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
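At a high level, PGExplainer learns a small network that scores every edge from its endpoint embeddings and keeps high-probability edges as the explanation. The sketch below shows only that scoring step with an illustrative two-layer scorer; the training objective and the SGNN adaptation are omitted.

    import numpy as np

    def edge_keep_probabilities(node_emb, edges, w1, b1, w2, b2):
        # edges: (num_edges, 2) int array of (source, target) indices.
        src, dst = edges[:, 0], edges[:, 1]
        pair = np.concatenate([node_emb[src], node_emb[dst]], axis=1)
        hidden = np.maximum(pair @ w1 + b1, 0.0)        # two-layer MLP scorer
        logits = (hidden @ w2 + b2).ravel()
        return 1.0 / (1.0 + np.exp(-logits))            # sigmoid -> [0, 1]

    # Edges whose probability clears a threshold form the explanation subgraph.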
- Explainability in Graph Neural Networks: An Experimental Survey [12.440636971075977] (arXiv, 2022-03-17)
Graph neural networks (GNNs) have been extensively developed for graph representation learning.
GNNs suffer from the black-box problem: the mechanisms underlying their predictions are opaque to users.
Several GNN explainability methods have been proposed to explain the decisions made by GNNs.
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602] (arXiv, 2021-12-18)
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
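Informally, writing f for the model's predicted probability of the class of interest and G_S for the explanatory subgraph, the two requirements can be stated as (our paraphrase, not the paper's notation):

    f(G) - f(G \setminus G_S) \ \text{large (necessary)}, \qquad f(G_S) \approx f(G) \ \text{(sufficient)}

i.e., removing the subgraph should hurt the prediction, while the subgraph alone should nearly reproduce it.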
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814] (arXiv, 2021-08-07)
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries mislead a GNN's predictions by modifying the input graph.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484] (arXiv, 2020-06-03)
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
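XGNN casts model-level explanation as graph generation: it trains a generator (with reinforcement learning in the paper) to build a graph that maximizes the model's score for a target class. The sketch below is a deliberately simple greedy stand-in for that idea, assuming a hypothetical class_score(adj) -> float oracle rather than XGNN's actual generator.

    import numpy as np

    def greedy_class_prototype(class_score, num_nodes, steps):
        # Greedily add the edge that most increases the target-class score.
        adj = np.zeros((num_nodes, num_nodes))
        for _ in range(steps):
            base, best_gain, best_edge = class_score(adj), 0.0, None
            for i in range(num_nodes):
                for j in range(i + 1, num_nodes):
                    if adj[i, j] == 0:
                        adj[i, j] = adj[j, i] = 1.0
                        gain = class_score(adj) - base
                        adj[i, j] = adj[j, i] = 0.0
                        if gain > best_gain:
                            best_gain, best_edge = gain, (i, j)
            if best_edge is None:
                break                      # no edge improves the score
            i, j = best_edge
            adj[i, j] = adj[j, i] = 1.0
        return adj                         # prototype graph for the class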
This list is automatically generated from the titles and abstracts of the papers on this site.