Verifying Relational Explanations: A Probabilistic Approach
- URL: http://arxiv.org/abs/2401.02703v1
- Date: Fri, 5 Jan 2024 08:14:51 GMT
- Title: Verifying Relational Explanations: A Probabilistic Approach
- Authors: Abisha Thapa Magar, Anup Shakya, Somdeb Sarkhel, Deepak Venugopal
- Abstract summary: We develop an approach where we assess the uncertainty in explanations generated by GNNExplainer.
We learn a factor graph model to quantify uncertainty in an explanation.
Our results on several datasets show that our approach can help verify explanations from GNNExplainer.
- Score: 2.113770213797994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explanations on relational data are hard to verify since the explanation
structures are more complex (e.g., graphs). Interpretable explanations (e.g.,
explanations of predictions made on images or text) are typically verified by
human subjects, since doing so does not necessarily require much expertise.
However, verifying the quality of a relational explanation requires expertise
and is hard to scale up. GNNExplainer is arguably one of the most popular
explanation methods for Graph Neural Networks. In this paper, we develop an
approach where we assess the uncertainty in explanations generated by
GNNExplainer. Specifically, we ask the explainer to generate explanations for
several counterfactual examples. We generate these examples as symmetric
approximations of the relational structure in the original data. From these
explanations, we learn a factor graph model to quantify uncertainty in an
explanation. Our results on several datasets show that our approach can help
verify explanations from GNNExplainer by reliably estimating the uncertainty of
a relation specified in the explanation.
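As an illustration of this idea, the sketch below (Python with NumPy) perturbs a graph into several symmetric counterfactual variants, collects an edge-importance mask for each from a stand-in explainer, and scores how consistently each edge is selected. The function names (`counterfactual_graphs`, `explain_edges`, `edge_uncertainty`), the flip-probability perturbation, and the frequency-based uncertainty score are illustrative assumptions; the paper itself learns a factor graph model over the explanations rather than the simple vote frequency used here.
```python
import numpy as np

rng = np.random.default_rng(0)

def counterfactual_graphs(adj, num_samples=20, flip_prob=0.05):
    """Perturb an undirected adjacency matrix by flipping a small fraction of
    edge entries symmetrically -- a stand-in for the paper's 'symmetric
    approximations' of the relational structure (illustrative assumption)."""
    n = adj.shape[0]
    samples = []
    for _ in range(num_samples):
        flips = np.triu(rng.random((n, n)) < flip_prob, k=1)
        flips = flips | flips.T                      # keep the perturbation symmetric
        samples.append(np.logical_xor(adj.astype(bool), flips).astype(int))
    return samples

def explain_edges(adj):
    """Placeholder explainer (a trained GNN plus GNNExplainer would go here):
    returns an importance score in [0, 1] for every edge present in the graph."""
    scores = rng.random(adj.shape) * adj             # dummy scores; replace with a real explainer
    return (scores + scores.T) / 2

def edge_uncertainty(adj, threshold=0.5):
    """Explain each counterfactual graph and measure, per edge, how often it is
    selected. Selection frequencies near 0 or 1 mean low uncertainty; frequencies
    near 0.5 mean the edge is an unreliable part of the explanation."""
    votes = [(explain_edges(cf) > threshold).astype(float)
             for cf in counterfactual_graphs(adj)]
    freq = np.mean(votes, axis=0)
    return 1.0 - 2.0 * np.abs(freq - 0.5)            # 0 = certain, 1 = maximally uncertain

# Toy usage on a small random undirected graph.
adj = np.triu((rng.random((6, 6)) < 0.4).astype(int), k=1)
adj = adj + adj.T
print(np.round(edge_uncertainty(adj), 2))
```
In practice, `explain_edges` would wrap a trained GNN and an explainer such as GNNExplainer, and the per-edge selection statistics would feed the learned factor graph model rather than the direct frequency score used above.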
Related papers
- GANExplainer: GAN-based Graph Neural Networks Explainer [5.641321839562139]
In many applications, it is critical to explain why a graph neural network (GNN) makes particular predictions so that its outputs can be trusted.
We propose GANExplainer, based on the Generative Adversarial Network (GAN) architecture.
GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.
arXiv Detail & Related papers (2022-12-30T23:11:24Z) - MEGAN: Multi-Explanation Graph Attention Network [1.1470070927586016]
We propose a multi-explanation graph attention network (MEGAN).
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable and explanations can actively be trained in an explanation-supervised manner.
arXiv Detail & Related papers (2022-11-23T16:10:13Z) - Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation [0.17842332554022688]
Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions.
We propose a novel GNN explanation framework named SCALE, which is general and fast for explaining predictions.
During training, a black-box GNN model guides the learners based on an online knowledge distillation paradigm.
Specifically, edge masking and random walk with restart procedures are executed to provide structural explanations for graph-level and node-level predictions (a minimal sketch of random walk with restart appears after this list).
arXiv Detail & Related papers (2022-10-20T08:44:57Z) - CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z) - ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z) - PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks [27.427529601958334]
We propose PGM-Explainer, a model-agnostic Probabilistic Graphical Model (PGM) explainer for Graph Neural Networks (GNNs).
Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction.
Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers in many benchmark tasks.
arXiv Detail & Related papers (2020-10-12T15:33:13Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
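The SCALE entry above mentions a random walk with restart procedure for producing structural explanations. Below is a minimal, generic sketch of that algorithm in its power-iteration form; the function name, restart probability, and convergence settings are illustrative assumptions, and this is not SCALE's actual implementation.
```python
import numpy as np

def random_walk_with_restart(adj, seed, restart_prob=0.15, tol=1e-8, max_iter=1000):
    """Generic random walk with restart: iterate p = (1 - c) * W p + c * e,
    where W is the column-normalized adjacency matrix and e is a one-hot vector
    on the seed node. The stationary p scores nodes by proximity to the seed,
    which is the kind of structural signal such explainers surface."""
    adj = np.asarray(adj, dtype=float)
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                    # avoid division by zero for isolated nodes
    W = adj / col_sums                               # column-stochastic transition matrix
    e = np.zeros(adj.shape[0]); e[seed] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * (W @ p) + restart_prob * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Example: a 4-node path graph; nodes closer to the seed get higher scores.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
print(np.round(random_walk_with_restart(adj, seed=0), 3))
```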
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.