Explainability in Graph Neural Networks: A Taxonomic Survey
- URL: http://arxiv.org/abs/2012.15445v2
- Date: Thu, 25 Mar 2021 17:30:12 GMT
- Title: Explainability in Graph Neural Networks: A Taxonomic Survey
- Authors: Hao Yuan, Haiyang Yu, Shurui Gui, and Shuiwang Ji
- Abstract summary: Graph neural networks (GNNs) and their explainability are experiencing rapid developments.
There is neither a unified treatment of GNN explainability methods, nor a standard benchmark and testbed for evaluations.
This work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
- Score: 42.95574260417341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods are achieving ever-increasing performance on many
artificial intelligence tasks. A major limitation of deep models is that they
are not amenable to interpretability. This limitation can be circumvented by
developing post hoc techniques to explain the predictions, giving rise to the
area of explainability. Recently, explainability of deep models on images and
texts has achieved significant progress. In the area of graph data, graph
neural networks (GNNs) and their explainability are experiencing rapid
developments. However, there is neither a unified treatment of GNN
explainability methods, nor a standard benchmark and testbed for evaluations.
In this survey, we provide a unified and taxonomic view of current GNN
explainability methods. Our unified and taxonomic treatments of this subject
shed light on the commonalities and differences of existing methods and set
the stage for further methodological developments. To facilitate evaluations,
we generate a set of benchmark graph datasets specifically for GNN
explainability. We summarize current datasets and metrics for evaluating GNN
explainability. Altogether, this work provides a unified methodological
treatment of GNN explainability and a standardized testbed for evaluations.
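As a concrete illustration of the evaluation metrics such surveys summarize, the sketch below computes a fidelity-style score: the drop in predicted probability when the nodes an explanation marks as important are removed. The model interface and mask format are assumptions for illustration, not a reference implementation from the survey.

```python
# Hedged sketch of a fidelity-style metric: how much does the predicted
# probability for the target class drop when the nodes an explainer marks as
# important are removed? The model/mask interface here is an assumption.
import torch


def fidelity_plus(model, x, edge_index, node_mask, node_idx, target_class):
    """node_mask: boolean tensor [N], True for nodes the explanation deems important."""
    with torch.no_grad():
        full_prob = model(x, edge_index).softmax(dim=-1)[node_idx, target_class]
        # Remove (zero out) the features of the important nodes.
        x_masked = x * (~node_mask).float().unsqueeze(-1)
        masked_prob = model(x_masked, edge_index).softmax(dim=-1)[node_idx, target_class]
    # Higher values mean the explanation captured inputs the model relies on.
    return (full_prob - masked_prob).item()
```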
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- A Survey on Explainability of Graph Neural Networks [4.612101932762187]
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
arXiv Detail & Related papers (2023-06-02T23:36:49Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics [8.795591344648294]
We focus on explainable graph neural networks and categorize them according to the explanation methods they use.
We present common performance metrics for GNN explanations and point out several future research directions.
arXiv Detail & Related papers (2022-07-26T01:45:54Z)
- Explainability in Graph Neural Networks: An Experimental Survey [12.440636971075977]
Graph neural networks (GNNs) have been extensively developed for graph representation learning.
However, GNNs suffer from the black-box problem, as their underlying decision mechanisms cannot be easily understood.
Several GNN explainability methods have been proposed to explain the decisions made by GNNs.
arXiv Detail & Related papers (2022-03-17T11:25:41Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks [33.20913249848369]
Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction.
We extend explainability methods for CNNs, such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM) to GNNs.
The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in real-world settings, outperforming even the state of the art.
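As a rough illustration of the gradient-based direction described above, the sketch below scores edges by the gradient of the target logit with respect to per-edge weights; it assumes a PyTorch Geometric-style GCN and is an illustrative approximation, not the paper's exact method.

```python
# Hypothetical sketch: edge-level gradient saliency for a GNN node classifier.
# Assumes a PyTorch Geometric-style GCN; names and shapes are illustrative.
import torch
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index, edge_weight):
        h = torch.relu(self.conv1(x, edge_index, edge_weight))
        return self.conv2(h, edge_index, edge_weight)


def edge_saliency(model, x, edge_index, node_idx, target_class):
    """Attribute the prediction for node_idx to individual edges via gradients."""
    edge_weight = torch.ones(edge_index.size(1), requires_grad=True)
    logits = model(x, edge_index, edge_weight)
    logits[node_idx, target_class].backward()
    # Larger absolute gradients mark edges that influence the prediction more.
    return edge_weight.grad.abs()
```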
arXiv Detail & Related papers (2021-11-01T06:27:29Z)
- SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [0.0]
We propose a method to improve the explanation quality of node classification tasks through aggregation of auxiliary explanations.
Applying SEEN does not require modification of a graph and can be used with diverse explainability techniques.
Experiments on matching motif-participating nodes in a given graph show improvements in explanation accuracy of up to 12.71%.
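The aggregation idea can be sketched roughly as follows; `run_explainer`, the neighbor set, and the plain averaging are hypothetical placeholders and do not reproduce SEEN's exact weighting scheme.

```python
# Rough sketch of aggregating auxiliary explanations from neighboring nodes
# to sharpen the explanation of a target node. `run_explainer` is a
# hypothetical stand-in for any node-level explainability technique that
# returns a tensor of importance scores.
import torch


def sharpened_explanation(run_explainer, target_node, neighbors):
    """Average the target's own importance scores with those obtained when
    explaining its neighbors (the auxiliary explanations)."""
    scores = [run_explainer(target_node)]            # primary explanation
    scores += [run_explainer(n) for n in neighbors]  # auxiliary explanations
    return torch.stack(scores).mean(dim=0)
```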
arXiv Detail & Related papers (2021-06-16T03:04:46Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
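The neighbor aggregation and combination step mentioned here can be written as a minimal message-passing layer; this generic sketch only illustrates the mechanism being explained, not XGNN itself.

```python
# Minimal sketch of one message-passing step: each node aggregates (here,
# averages) its neighbors' features and combines them with its own.
import torch


def message_passing_step(x, edge_index, weight):
    """x: [N, d] node features; edge_index: [2, E] (source, target) pairs;
    weight: [2 * d, d] learnable combination matrix."""
    src, dst = edge_index
    agg = torch.zeros_like(x)
    agg.index_add_(0, dst, x[src])                            # sum neighbor features
    deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(dst.size(0)))
    agg = agg / deg.clamp(min=1).unsqueeze(-1)                # mean aggregation
    return torch.relu(torch.cat([x, agg], dim=-1) @ weight)   # combine and transform
```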
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.