Reinforced Causal Explainer for Graph Neural Networks
- URL: http://arxiv.org/abs/2204.11028v2
- Date: Wed, 27 Apr 2022 02:45:59 GMT
- Title: Reinforced Causal Explainer for Graph Neural Networks
- Authors: Xiang Wang, Yingxin Wu, An Zhang, Fuli Feng, Xiangnan He, Tat-Seng
Chua
- Abstract summary: Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability is crucial for probing graph neural networks (GNNs), answering
questions like "Why does the GNN model make a certain prediction?". Feature
attribution is a prevalent technique for highlighting the explanatory subgraph
in the input graph, which plausibly leads the GNN model to make its prediction.
Various attribution methods exploit gradient-like or attention scores as edge
attributions, and then select the salient edges with the top attribution
scores as the explanation. However, most of these works make an untenable
assumption - that the selected edges are linearly independent - thus leaving the
dependencies among edges largely unexplored, especially their coalition effect.
We demonstrate unambiguous drawbacks of this assumption: it makes the
explanatory subgraph unfaithful and verbose. To address this challenge, we
propose a reinforcement learning agent, Reinforced Causal Explainer
(RC-Explainer). It frames the explanation task as a sequential decision process
- an explanatory subgraph is constructed by successively adding salient edges
that connect to the previously selected subgraph. Technically, its policy network
predicts the action of edge addition and receives a reward that quantifies the
action's causal effect on the prediction. Such a reward accounts for the
dependency between the newly added edge and the previously added edges, thus
reflecting whether they collaborate and form a coalition in pursuit of a
better explanation. As such, RC-Explainer generates faithful and
concise explanations, and generalizes better to unseen graphs.
When explaining different GNNs on three graph classification datasets,
RC-Explainer achieves performance better than or comparable to state-of-the-art
approaches w.r.t. predictive accuracy and contrastivity, and safely passes sanity
checks and visual inspections. Code is available at
https://github.com/xiangwang1223/reinforced_causal_explainer.
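To make the sequential decision process concrete, below is a minimal sketch of the greedy edge-selection loop the abstract describes. This is not the authors' implementation (see the repository above for that); `policy_net`, `gnn`, and the `edge_subgraph` graph interface are hypothetical stand-ins.

```python
# Hedged sketch of RC-Explainer-style sequential edge selection.
# Assumptions: `gnn(graph)` returns class probabilities; `policy_net`
# scores candidate edges given the subgraph built so far; and
# `graph.edge_subgraph(edges)` keeps only the listed edges.
import torch

def explain(gnn, policy_net, graph, target_class, budget=5):
    """Greedily grow an explanatory subgraph one edge at a time."""
    selected = []                                  # edges chosen so far
    candidates = list(range(graph.num_edges))      # edges still available
    prev_score = gnn(graph.edge_subgraph(selected))[target_class]
    for _ in range(budget):
        # The policy scores every candidate edge, conditioned on the
        # subgraph built so far; the best-scoring edge is added.
        logits = policy_net(graph, selected, candidates)
        action = candidates[int(torch.argmax(logits))]
        selected.append(action)
        candidates.remove(action)
        # Reward: the causal effect of this addition, i.e. how much the
        # target-class prediction changes once the new edge joins the
        # coalition of previously selected edges.
        score = gnn(graph.edge_subgraph(selected))[target_class]
        reward = score - prev_score  # would drive a policy-gradient update
        prev_score = score
    return selected
```

During training, the per-step rewards would be accumulated and used to update `policy_net` with a policy-gradient method; at inference, the loop above suffices to produce an explanation.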
Related papers
- Faithful and Consistent Graph Neural Network Explanations with Rationale
Alignment [38.66324833510402]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed; most of them formalize this task as searching for the minimal subgraph that preserves the original prediction.
However, several different subgraphs can yield the same or similar outputs as the original graph, making such explanations potentially spurious and inconsistent.
Applying them to explain weakly performing GNNs would further amplify these issues.
arXiv Detail & Related papers (2023-01-07T06:33:35Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph
Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as searching for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z) - Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing an out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
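A one-function sketch (with the same hypothetical `gnn` and `edge_subgraph` interface as above, not the DSE implementation) of the naive evaluation this paper argues against: feeding the subgraph to the model in isolation and reading off the prediction change.

```python
def naive_effect(gnn, graph, subgraph_edges, target_class):
    # Prediction on the full graph vs. on the subgraph fed in isolation.
    full = gnn(graph)[target_class]
    sub = gnn(graph.edge_subgraph(subgraph_edges))[target_class]
    # Because the isolated subgraph is out-of-distribution w.r.t. the
    # graphs the GNN was trained on, this difference conflates the
    # explanation's true effect with the distribution shift - the
    # confounding that DSE's deconfounded estimator corrects for.
    return full - sub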
arXiv Detail & Related papers (2022-01-21T18:05:00Z) - Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead GNN predictions by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z) - Robust Counterfactual Explanations on Graph Neural Networks [42.91881080506145]
Massive deployment of Graph Neural Networks (GNNs) in high-stakes applications generates a strong demand for explanations that are robust to noise.
Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction.
We propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs.
arXiv Detail & Related papers (2021-07-08T19:50:00Z) - ExplaGraphs: An Explanation Graph Generation Task for Structured
Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be used in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)