Generative Explanations for Graph Neural Network: Methods and
Evaluations
- URL: http://arxiv.org/abs/2311.05764v1
- Date: Thu, 9 Nov 2023 22:07:15 GMT
- Title: Generative Explanations for Graph Neural Network: Methods and
Evaluations
- Authors: Jialin Chen, Kenza Amara, Junchi Yu, Rex Ying
- Abstract summary: Graph Neural Networks (GNNs) achieve state-of-the-art performance in various graph-related tasks.
The black-box nature of GNNs limits their interpretability and trustworthiness.
Numerous explainability methods have been proposed to uncover the decision-making logic of GNNs.
- Score: 16.67839967139831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) achieve state-of-the-art performance in various
graph-related tasks. However, their black-box nature often limits their
interpretability and trustworthiness. Numerous explainability methods have been
proposed to uncover the decision-making logic of GNNs by generating underlying
explanatory substructures. In this paper, we conduct a comprehensive review of
the existing explanation methods for GNNs from the perspective of graph
generation. Specifically, we propose a unified optimization objective for
generative explanation methods, comprising two sub-objectives: Attribution and
Information constraints. We further demonstrate their specific manifestations
in various generative model architectures and different explanation scenarios.
With the unified objective of the explanation problem, we reveal the shared
characteristics and distinctions among current methods, laying the foundation
for future methodological advancements. Empirical results demonstrate the
advantages and limitations of different explainability approaches in terms of
explanation performance, efficiency, and generalizability.
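To make the two sub-objectives concrete, a common instantiation from the explainability literature can be sketched as follows; this is an illustrative form, not necessarily the paper's exact formulation, and the symbols f, G_s, L, I, and lambda are assumptions for exposition:

```latex
% Illustrative unified objective (not the paper's exact notation):
% find the explanatory subgraph that preserves the prediction (Attribution)
% while staying compact (Information).
G_s^{*} \;=\; \operatorname*{arg\,min}_{G_s \subseteq G}\;
  \underbrace{\mathcal{L}\big(f(G),\, f(G_s)\big)}_{\text{Attribution}}
  \;+\; \lambda\,\underbrace{\mathcal{I}(G_s)}_{\text{Information}}
```

Here f is the trained GNN, G_s a candidate explanatory subgraph, L an attribution loss that penalizes prediction change (e.g., cross-entropy between the predictions on G and on G_s), and I an information constraint (e.g., an l1 sparsity penalty on a soft edge mask, or a mutual-information bound); lambda trades the two terms off.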
Related papers
- Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z)
- Learning How to Propagate Messages in Graph Neural Networks [55.2083896686782]
This paper studies the problem of learning message propagation strategies for graph neural networks (GNNs).
We introduce the optimal propagation steps as latent variables to help find the maximum-likelihood estimation of the GNN parameters.
Our proposed framework can effectively learn personalized and interpretable message propagation strategies in GNNs.
arXiv Detail & Related papers (2023-10-01T15:09:59Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Faithful Explanations for Deep Graph Models [44.3056871040946]
This paper studies faithful explanations for Graph Neural Networks (GNNs).
The proposed notion of faithfulness applies to existing explanation methods, including feature attributions and subgraph explanations.
We introduce k-hop Explanation with a Convolutional Core (KEC), a new explanation method that provably maximizes faithfulness to the original GNN.
arXiv Detail & Related papers (2022-05-24T07:18:56Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while the explanation of unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB) (see the sketch after this entry).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
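As context for the USIB principle above, a generic Information Bottleneck objective for explanatory subgraphs can be sketched as follows; this simply replaces the label with the learned representation, is only an illustrative form, and may differ from USIB's exact formulation (Z, G_s, and beta are assumed symbols, not the paper's notation):

```latex
% Illustrative IB-style objective (not USIB's exact formulation):
% keep the subgraph informative about the representation, while compressing it.
\max_{G_s \subseteq G} \; I(Z;\, G_s) \;-\; \beta\, I(G;\, G_s)
```

Here Z is the unsupervised graph-level representation, G_s the explanatory subgraph, and beta trades informativeness against compression, mirroring the Attribution and Information sub-objectives of the main paper.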
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph-structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations [25.954303305216094]
We introduce the first axiomatic framework for theoretically analyzing, evaluating, and comparing state-of-the-art GNN explanation methods.
We leverage these axiomatic properties to present the first-ever theoretical analysis of the effectiveness of state-of-the-art GNN explanation methods.
arXiv Detail & Related papers (2021-06-16T18:38:30Z)
- SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [0.0]
We propose a method to improve the explanation quality of node classification tasks through aggregation of auxiliary explanations.
Applying SEEN does not require modification of a graph and can be used with diverse explainability techniques.
Experiments on matching motif-participating nodes from a given graph show improvements in explanation accuracy of up to 12.71%.
arXiv Detail & Related papers (2021-06-16T03:04:46Z)
- Explainability in Graph Neural Networks: A Taxonomic Survey [42.95574260417341]
Graph neural networks (GNNs) and their explainability are experiencing rapid developments.
There is neither a unified treatment of GNN explainability methods, nor a standard benchmark and testbed for evaluations.
This work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
arXiv Detail & Related papers (2020-12-31T04:34:27Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be used in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7% relative improvement in AUC on explaining graph classification (see the sketch after this entry).
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
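To illustrate the parameterized-masking idea behind explainers like PGExplainer, here is a minimal sketch in plain PyTorch: a shared MLP maps the trained GNN's endpoint embeddings to per-edge mask logits, and a binary-concrete relaxation makes the mask differentiable. This is a hedged reconstruction of the general recipe, not the authors' implementation; `EdgeMaskExplainer`, `gnn`, and the loss weights are hypothetical names.

```python
# Hedged sketch of parameterized edge masking (illustrative, not the
# authors' code). One explainer network is shared across instances,
# which is what enables the inductive setting.
import torch
import torch.nn as nn

class EdgeMaskExplainer(nn.Module):
    """Predict a keep-probability logit for every edge from the trained
    GNN's node embeddings."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor):
        src, dst = edge_index                      # edge_index: (2, E)
        pair = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        return self.mlp(pair).squeeze(-1)          # one logit per edge

def concrete_mask(logits: torch.Tensor, temperature: float = 0.5):
    """Binary-concrete relaxation: a differentiable ~Bernoulli edge mask."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)
    return torch.sigmoid((logits + noise) / temperature)

# Training-loop sketch (assumes a trained `gnn` that accepts an edge
# weighting, plus its cached node embeddings `node_emb`):
#   mask = concrete_mask(explainer(node_emb, edge_index))
#   loss = cross_entropy(gnn(x, edge_index, edge_weight=mask), original_pred) \
#          + lam * mask.mean()   # sparsity / information constraint
```

The attribution term keeps the masked prediction close to the original one, while the mask-size penalty plays the role of the information constraint in the unified objective above.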
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)