Faithful Explanations for Deep Graph Models
- URL: http://arxiv.org/abs/2205.11850v1
- Date: Tue, 24 May 2022 07:18:56 GMT
- Title: Faithful Explanations for Deep Graph Models
- Authors: Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee
Joe-Wong, Matt Fredrikson, Anupam Datta
- Abstract summary: This paper studies faithful explanations for Graph Neural Networks (GNNs).
Its framework applies to existing explanation methods, including feature attributions and subgraph explanations.
It also introduces k-hop Explanation with a Convolutional Core (KEC), a new explanation method that provably maximizes faithfulness to the original GNN.
- Score: 44.3056871040946
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper studies faithful explanations for Graph Neural Networks (GNNs).
First, we provide a new and general method for formally characterizing the
faithfulness of explanations for GNNs. It applies to existing explanation
methods, including feature attributions and subgraph explanations. Second, our
analytical and empirical results demonstrate that feature attribution methods
cannot capture the nonlinear effect of edge features, while existing subgraph
explanation methods are not faithful. Third, we introduce \emph{k-hop
Explanation with a Convolutional Core} (KEC), a new explanation method that
provably maximizes faithfulness to the original GNN by leveraging information
about the graph structure in its adjacency matrix and its \emph{k-th} power.
Lastly, our empirical results over both synthetic and real-world datasets for
classification and anomaly detection tasks with GNNs demonstrate the
effectiveness of our approach.
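The abstract notes that KEC leverages the adjacency matrix and its k-th power. This rests on a standard graph-theoretic fact: entry (i, j) of A^k counts walks of length k between nodes i and j, so powers of the adjacency matrix encode k-hop structure. A minimal NumPy sketch of that fact (illustrative only, not the authors' implementation):

```python
import numpy as np

# Toy 4-node path graph: 0 - 1 - 2 - 3
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])

def k_hop_walks(A, k):
    """(A^k)[i, j] counts length-k walks from i to j; a nonzero
    entry means j is reachable from i in exactly k steps."""
    return np.linalg.matrix_power(A, k)

A2 = k_hop_walks(A, 2)
print(A2[0, 2])  # one length-2 walk from node 0 to node 2 (via node 1)
print(A2[0, 3])  # no length-2 walk from node 0 to node 3
```

An explanation method with access to A^k can therefore weight features of nodes up to k hops away, matching the receptive field of a k-layer graph convolution.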
Related papers
- GOAt: Explaining Graph Neural Networks via Graph Output Attribution [32.66251068600664]
This paper introduces Graph Output Attribution (GOAt), a novel method to attribute graph outputs to input graph features.
GOAt is faithful, discriminative, as well as stable across similar samples.
We show that our method outperforms various state-of-the-art GNN explainers in terms of the commonly used fidelity metric.
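The fidelity metric mentioned here is commonly computed as the drop in the model's predicted probability when the features flagged by the explainer are removed. A hedged sketch of that idea, using a stand-in callable rather than GOAt's actual API:

```python
import numpy as np

def fidelity_plus(model, x, explanation_mask):
    """Probability drop when the explained features are zeroed out.

    model: callable mapping a feature vector to a class probability
           (a hypothetical stand-in, not any specific GNN)
    x: feature vector
    explanation_mask: boolean array, True where the explainer
                      says the feature matters
    """
    p_full = model(x)
    p_masked = model(np.where(explanation_mask, 0.0, x))
    return p_full - p_masked

# Toy logistic "model" for illustration
w = np.array([2.0, -1.0, 0.5])
model = lambda x: 1 / (1 + np.exp(-(w @ x)))

x = np.array([1.0, 1.0, 1.0])
mask = np.array([True, False, False])  # explainer flags feature 0
print(fidelity_plus(model, x, mask))
```

A large positive fidelity means the flagged features were genuinely driving the prediction; values near zero suggest the explanation is uninformative.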
arXiv Detail & Related papers (2024-01-26T00:32:58Z) - View-based Explanations for Graph Neural Networks [27.19300566616961]
We propose GVEX, a novel paradigm that generates Graph Views for EXplanation.
We show that this strategy provides an approximation ratio of 1/2.
Our second algorithm performs a single pass over an input node stream in batches to incrementally maintain explanation views.
arXiv Detail & Related papers (2024-01-04T06:20:24Z) - Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations [25.954303305216094]
We introduce the first axiomatic framework for theoretically analyzing, evaluating, and comparing state-of-the-art GNN explanation methods.
We leverage these properties to present the first ever theoretical analysis of the effectiveness of state-of-the-art GNN explanation methods.
arXiv Detail & Related papers (2021-06-16T18:38:30Z) - SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [0.0]
We propose a method to improve the explanation quality of node classification tasks through aggregation of auxiliary explanations.
Applying SEEN does not require modification of a graph and can be used with diverse explainability techniques.
Experiments on matching motif-participating nodes in a given graph show improvements in explanation accuracy of up to 12.71%.
arXiv Detail & Related papers (2021-06-16T03:04:46Z) - GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
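GraphSVX builds on Shapley values from cooperative game theory. The generic Shapley formula (not GraphSVX's own sampling scheme, which approximates it) averages each feature's marginal contribution over all coalitions of the other features. A minimal exact-computation sketch for a handful of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley value of each of n players (features).

    value_fn: frozenset of feature indices -> model value
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # Standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy additive game: each feature contributes its weight independently,
# so the Shapley values recover the weights exactly.
weights = {0: 2.0, 1: -1.0, 2: 0.5}
v = lambda S: sum(weights[j] for j in S)
print(shapley_values(v, 3))  # [2.0, -1.0, 0.5]
```

Exact computation is exponential in the number of features, which is why practical explainers rely on sampling or structure-aware approximations.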
arXiv Detail & Related papers (2021-04-18T10:40:37Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.