D4Explainer: In-Distribution GNN Explanations via Discrete Denoising Diffusion
- URL: http://arxiv.org/abs/2310.19321v1
- Date: Mon, 30 Oct 2023 07:41:42 GMT
- Title: D4Explainer: In-Distribution GNN Explanations via Discrete Denoising Diffusion
- Authors: Jialin Chen, Shirley Wu, Abhijit Gupta, Rex Ying
- Abstract summary: The explainability of Graph Neural Networks (GNNs) plays a vital role in model auditing and ensuring trustworthy graph learning.
D4Explainer is a novel approach that provides in-distribution GNN explanations for both counterfactual and model-level explanation scenarios.
It is the first unified framework that combines both counterfactual and model-level explanations.
- Score: 12.548966346327349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread deployment of Graph Neural Networks (GNNs) sparks significant
interest in their explainability, which plays a vital role in model auditing
and ensuring trustworthy graph learning. The objective of GNN explainability is
to discern the underlying graph structures that have the most significant
impact on model predictions. Ensuring that explanations generated are reliable
necessitates consideration of the in-distribution property, particularly due to
the vulnerability of GNNs to out-of-distribution data. Unfortunately,
prevailing explainability methods tend to constrain the generated explanations
to the structure of the original graph, thereby downplaying the significance of
the in-distribution property and resulting in explanations that lack
reliability. To address these challenges, we propose D4Explainer, a novel
approach that provides in-distribution GNN explanations for both counterfactual
and model-level explanation scenarios. The proposed D4Explainer incorporates
generative graph distribution learning into the optimization objective, which
accomplishes two goals: 1) generate a collection of diverse counterfactual
graphs that conform to the in-distribution property for a given instance, and
2) identify the most discriminative graph patterns that contribute to a
specific class prediction, thus serving as model-level explanations. It is
worth mentioning that D4Explainer is the first unified framework that combines
both counterfactual and model-level explanations. Empirical evaluations
conducted on synthetic and real-world datasets provide compelling evidence of
the state-of-the-art performance achieved by D4Explainer in terms of
explanation accuracy, faithfulness, diversity, and robustness.
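To make the mechanism concrete: the abstract describes learning the graph distribution with a discrete denoising diffusion model so that generated counterfactuals stay in-distribution. The sketch below is a minimal, hypothetical illustration of that general recipe, not the authors' implementation; the edge-flip forward process, the toy denoiser, and all names and hyperparameters are assumptions made here for illustration.

```python
# Toy discrete denoising diffusion over graph edges (illustrative sketch only,
# not the D4Explainer code).
import torch
import torch.nn as nn

def forward_corrupt(adj: torch.Tensor, flip_prob: float) -> torch.Tensor:
    """Forward process: independently flip each edge with probability flip_prob."""
    flips = torch.bernoulli(torch.full_like(adj, flip_prob))
    return (adj + flips) % 2  # XOR: corrupted entries toggle between 0 and 1

class EdgeDenoiser(nn.Module):
    """Toy denoiser that predicts clean-edge logits from a noisy adjacency.
    A real model would also condition on node features and the timestep."""
    def __init__(self, n_nodes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_nodes * n_nodes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_nodes * n_nodes),
        )

    def forward(self, noisy_adj: torch.Tensor) -> torch.Tensor:
        return self.net(noisy_adj.flatten(1)).view_as(noisy_adj)

# One denoising-training step on a random undirected toy graph.
n = 8
model = EdgeDenoiser(n)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = (torch.rand(1, n, n) < 0.3).float()
clean = torch.triu(clean, diagonal=1)
clean = clean + clean.transpose(1, 2)          # symmetrize: undirected graph

noisy = forward_corrupt(clean, flip_prob=0.2)  # forward process at some step t
logits = model(noisy)                          # reverse step: predict clean edges
loss = nn.functional.binary_cross_entropy_with_logits(logits, clean)
loss.backward()
opt.step()
```

In a scheme like this, a counterfactual can then be sampled by corrupting the input graph and letting the trained denoiser reconstruct it, optionally guided by the target GNN toward a different prediction; because the denoiser was trained on the data distribution, the reconstructed graphs tend to remain in-distribution.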
Related papers
- Explaining Hypergraph Neural Networks: From Local Explanations to Global Concepts [18.220099086165394]
We introduce SHypX, the first model-agnostic post-hoc explainer for hypergraph neural networks.
At the instance-level, it performs input attribution by discretely sampling explanation subhypergraphs optimized to be faithful and concise.
At the model-level, it produces global explanation subhypergraphs using unsupervised concept extraction.
arXiv Detail & Related papers (2024-10-10T09:50:28Z)
- Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z)
- MEGAN: Multi-Explanation Graph Attention Network [1.1470070927586016]
We propose a multi-explanation graph attention network (MEGAN).
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable, and explanations can be actively trained in an explanation-supervised manner.
arXiv Detail & Related papers (2022-11-23T16:10:13Z)
- RES: A Robust Framework for Guiding Visual Explanation [8.835733039270364]
We propose a framework for guiding visual explanation by developing a novel objective that handles inaccurate boundaries, incomplete regions, and inconsistent distributions of human annotations.
Experiments on two real-world image datasets demonstrate the effectiveness of the proposed framework in enhancing both the reasonability of the explanations and the performance of the backbone models.
arXiv Detail & Related papers (2022-06-27T16:06:27Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while the explanation of unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We theoretically show that EERM guarantees a valid OOD solution (a schematic form of its bi-level objective is sketched after this list).
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that drives the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7% relative improvement in AUC on explaining graph classification; a minimal edge-mask sketch in this spirit follows the list.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
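The EERM entry above compresses a bi-level objective into one sentence. Written out schematically (the notation here is chosen for illustration, not copied from the paper), the idea is roughly:

```latex
% Schematic EERM-style objective (illustrative notation).
% K virtual environments are produced by context generators g_{w_k};
% the generators maximize the variance of the per-environment risks,
% while the GNN f_theta minimizes that variance plus the mean risk.
\min_{\theta}\;
  \operatorname{Var}_{k}\bigl[\ell\bigl(f_{\theta};\, g_{w_k^{*}}(G)\bigr)\bigr]
  \;+\; \frac{\beta}{K} \sum_{k=1}^{K} \ell\bigl(f_{\theta};\, g_{w_k^{*}}(G)\bigr),
\qquad
w^{*} \in \arg\max_{w}\;
  \operatorname{Var}_{k}\bigl[\ell\bigl(f_{\theta};\, g_{w_k}(G)\bigr)\bigr]
```

Minimizing the variance term encourages predictions whose risk is invariant across the adversarially generated environments, which is what underlies the OOD guarantee claimed in the entry.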
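For the PGExplainer entry, the referenced edge-mask sketch is below: a minimal, hypothetical illustration of a parameterized explainer in that spirit, not the official implementation. The shared MLP, the concrete (relaxed Bernoulli) mask, and all shapes and names are assumptions made here.

```python
# Minimal sketch of a parameterized edge-mask explainer (illustrative only).
import torch
import torch.nn as nn

class EdgeMaskMLP(nn.Module):
    """Scores each edge from its endpoint embeddings. Because the MLP is
    shared across edges and graphs, it can be applied inductively."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, z_src: torch.Tensor, z_dst: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_src, z_dst], dim=-1)).squeeze(-1)

def concrete_mask(logits: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Relaxed Bernoulli sample so the edge mask stays differentiable."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    return torch.sigmoid((logits + torch.log(u) - torch.log(1 - u)) / temperature)

# Hypothetical setup: node embeddings z from a frozen, pre-trained GNN and
# an edge_index of shape (2, E) for the graph being explained.
emb_dim, E = 32, 40
z = torch.randn(10, emb_dim)
edge_index = torch.randint(0, 10, (2, E))

explainer = EdgeMaskMLP(emb_dim)
logits = explainer(z[edge_index[0]], z[edge_index[1]])
mask = concrete_mask(logits)        # soft edge mask in (0, 1)
sparsity = mask.mean()              # regularizer: prefer small explanations
# A full training loop would re-run the GNN with edges weighted by `mask`
# and add a cross-entropy term keeping the masked prediction close to the
# original one, then select the top-scoring edges as the explanation.
```

Training the mask predictor once and reusing it on new graphs is what gives an explainer of this kind the inductive generalization the entry highlights.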