Explainability in Graph Neural Networks: An Experimental Survey
- URL: http://arxiv.org/abs/2203.09258v1
- Date: Thu, 17 Mar 2022 11:25:41 GMT
- Title: Explainability in Graph Neural Networks: An Experimental Survey
- Authors: Peibo Li, Yixing Yang, Maurice Pagnucco, Yang Song
- Abstract summary: Graph neural networks (GNNs) have been extensively developed for graph representation learning.
GNNs suffer from the black-box problem as people cannot understand the mechanism underlying them.
Several GNN explainability methods have been proposed to explain the decisions made by GNNs.
- Score: 12.440636971075977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have been extensively developed for graph
representation learning in various application domains. However, like all
other neural network models, GNNs suffer from the black-box problem as people
cannot understand the mechanism underlying them. To solve this problem, several
GNN explainability methods have been proposed to explain the decisions made by
GNNs. In this survey, we give an overview of the state-of-the-art GNN
explainability methods and how they are evaluated. Furthermore, we propose a
new evaluation metric and conduct thorough experiments to compare GNN
explainability methods on real-world datasets. We also suggest future
directions for GNN explainability.
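The abstract does not specify the proposed evaluation metric, but a widely used metric in the GNN explainability literature is fidelity: the drop in the model's predicted class probability when the edges selected by an explanation are removed from the graph. A minimal NumPy sketch, with a hypothetical toy model standing in for a trained GNN:

```python
import numpy as np

def fidelity(predict, adj, explanation_mask):
    """Fidelity: drop in the predicted class probability when the
    edges selected by the explanation are removed from the graph.
    `predict` maps an adjacency matrix to class probabilities."""
    probs = predict(adj)
    label = int(np.argmax(probs))
    # Remove the explanation edges and re-predict.
    masked = adj * (1 - explanation_mask)
    masked_probs = predict(masked)
    return probs[label] - masked_probs[label]

def toy_predict(adj):
    # Hypothetical stand-in for a trained GNN: confidence in class 1
    # grows with the number of edges in the graph.
    s = adj.sum()
    p1 = s / (s + 1.0)
    return np.array([1.0 - p1, p1])
```

A high fidelity score means the explanation captured edges the model actually relied on; a score near zero suggests the explanation is uninformative.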
Related papers
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107]
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
arXiv Detail & Related papers (2022-09-16T13:39:10Z)
- A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics [8.795591344648294]
We focus on explainable graph neural networks and categorize them based on the explainable methods they use.
We provide common performance metrics for GNN explanations and point out several future research directions.
arXiv Detail & Related papers (2022-07-26T01:45:54Z)
- Toward the Analysis of Graph Neural Networks [1.0412114420493723]
Graph Neural Networks (GNNs) have emerged as a robust framework for graph-structured data analysis.
This paper proposes an approach to analyzing GNNs by converting them into Feed-Forward Neural Networks (FFNNs) and reusing existing FFNN analyses.
arXiv Detail & Related papers (2022-01-01T04:59:49Z)
- ProtGNN: Towards Self-Explaining Graph Neural Networks [12.789013658551454]
We propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs.
ProtGNN and its variant ProtGNN+ can provide inherent interpretability while achieving accuracy on par with their non-interpretable counterparts.
arXiv Detail & Related papers (2021-12-02T01:16:29Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
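As a generic illustration of the kind of vulnerability such attacks exploit (not the GEAttack algorithm itself), a brute-force evasion attack can search for a single edge flip that changes the model's prediction:

```python
import numpy as np

def single_edge_attack(predict, adj):
    """Brute-force evasion attack: flip each candidate edge (add or
    remove) in an undirected graph and return the first perturbed
    adjacency matrix whose predicted class changes, or None if no
    single flip succeeds. `predict` maps an adjacency matrix to
    class probabilities."""
    original_class = int(np.argmax(predict(adj)))
    n = adj.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            perturbed = adj.copy()
            # Flip edge (i, j) symmetrically: add it if absent, remove it if present.
            perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
            if int(np.argmax(predict(perturbed))) != original_class:
                return perturbed
    return None
```

Real attacks such as GEAttack use gradient-based search rather than exhaustive enumeration, and additionally target the explanation method, but the underlying threat model is the same: small structural perturbations can flip a GNN's output.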
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Explainability in Graph Neural Networks: A Taxonomic Survey [42.95574260417341]
Graph neural networks (GNNs) and their explainability are experiencing rapid developments.
There is neither a unified treatment of GNN explainability methods, nor a standard benchmark and testbed for evaluations.
This work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
arXiv Detail & Related papers (2020-12-31T04:34:27Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
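Several summaries above describe GNNs as learning node features by aggregating neighbor information. A minimal mean-aggregation layer, sketched in NumPy with assumed shapes (adjacency n×n, features n×d, weight d×d'):

```python
import numpy as np

def gnn_layer(adj, features, weight):
    """One simplified GNN layer: each node averages its neighbors'
    features (including its own via a self-loop), then applies a
    learned linear map followed by a ReLU nonlinearity."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)       # node degrees
    aggregated = (a_hat @ features) / deg        # mean aggregation over neighbors
    return np.maximum(aggregated @ weight, 0.0)  # linear transform + ReLU
```

This is the kind of aggregate-and-combine step that explainability methods such as DEGREE decompose to attribute a prediction back to parts of the input graph; production GNN libraries implement it with learned weights and sparse message passing rather than dense matrix products.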
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.