Deconfounding to Explanation Evaluation in Graph Neural Networks
- URL: http://arxiv.org/abs/2201.08802v1
- Date: Fri, 21 Jan 2022 18:05:00 GMT
- Title: Deconfounding to Explanation Evaluation in Graph Neural Networks
- Authors: Ying-Xin (Shirley) Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng,
Xiangnan He, Tat-Seng Chua
- Abstract summary: We argue that a distribution shift exists between the full graph and the subgraph, causing an out-of-distribution (OOD) problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
- Score: 136.73451468551656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability of graph neural networks (GNNs) aims to answer "Why
did the GNN make a certain prediction?", which is crucial for interpreting the
model prediction. The feature attribution framework distributes a GNN's
prediction to its input features (e.g., edges), identifying an influential
subgraph as the explanation. When evaluating the explanation (i.e., subgraph
importance), a standard approach is to audit the model prediction based solely
on the subgraph.
However, we argue that a distribution shift exists between the full graph and
the subgraph, causing an out-of-distribution (OOD) problem. Furthermore,
through an in-depth causal analysis, we find that the OOD effect acts as a
confounder, introducing spurious associations between subgraph importance and
the model prediction and making the evaluation less reliable. In this work, we
propose Deconfounded Subgraph Evaluation (DSE), which assesses the causal
effect of an explanatory subgraph on the model prediction. While the
distribution shift is generally intractable, we employ the front-door
adjustment and introduce a surrogate variable of the subgraphs. Specifically,
we devise a generative model to generate plausible surrogates that conform to
the data distribution, thus approaching an unbiased estimate of subgraph
importance. Empirical results demonstrate the effectiveness of DSE in terms of
explanation fidelity.
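The front-door adjustment the abstract refers to can be written generically as
follows; the notation (treatment G_s for the explanatory subgraph, mediator
G^* for the surrogate, outcome Y for the model prediction) is illustrative
rather than the paper's exact formulation:

```latex
% Generic front-door adjustment: the causal effect of the explanatory
% subgraph G_s on the prediction Y is recovered by marginalizing over
% the surrogate mediator G^* and over the treatment's own distribution.
P(Y \mid \mathrm{do}(G_s))
  = \sum_{G^*} P(G^* \mid G_s)
    \sum_{G_s'} P(Y \mid G^*, G_s')\, P(G_s')
```

Below is a minimal Python sketch contrasting the two evaluation protocols. It
assumes a graph classifier `gnn` that maps a single graph object to a 1-D
tensor of class logits, and a hypothetical `generate_surrogates` callable
standing in for the paper's generative model; only the surrogate-averaging
step of the adjustment is shown:

```python
import torch

def naive_importance(gnn, subgraph, target_class):
    """Standard protocol: feed the explanatory subgraph alone to the GNN.
    The paper argues this estimate is confounded by the OOD gap between
    full graphs and bare subgraphs."""
    with torch.no_grad():
        probs = torch.softmax(gnn(subgraph), dim=-1)
    return probs[target_class].item()

def deconfounded_importance(gnn, subgraph, target_class,
                            generate_surrogates, n_samples=32):
    """DSE-style protocol (sketch): average the prediction over generated
    surrogates that contain the subgraph yet conform to the data
    distribution, rather than scoring the bare subgraph."""
    scores = []
    with torch.no_grad():
        for surrogate in generate_surrogates(subgraph, n_samples):
            probs = torch.softmax(gnn(surrogate), dim=-1)
            scores.append(probs[target_class].item())
    # Monte Carlo estimate of the surrogate-averaged class probability.
    return sum(scores) / len(scores)
```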
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment [38.66324833510402]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed; most formalize this task as searching for the minimal subgraph that preserves the original prediction (see the sketch after this list).
However, several different subgraphs can yield the same or similar outputs as the original graph, making the resulting explanations inconsistent.
Applying them to explain weakly performing GNNs further amplifies these issues.
arXiv Detail & Related papers (2023-01-07T06:33:35Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most formalize this task as searching for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
arXiv Detail & Related papers (2022-03-29T03:08:33Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Existing Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
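Several of the papers above formalize instance-level explanation as finding a
minimal subgraph that preserves the original prediction. A minimal sketch of
that formalization, assuming a graph classifier `gnn` that returns class
logits, a precomputed per-edge attribution tensor `edge_scores`, and a
hypothetical `graph.subgraph_from_edges` helper:

```python
import torch

def minimal_subgraph_explanation(gnn, graph, edge_scores, keep_ratio=0.1):
    """Sketch of the minimal-subgraph formalization: rank edges by an
    attribution score, keep the top fraction as the explanatory
    subgraph, and check whether it preserves the GNN's prediction.
    `graph.subgraph_from_edges` is a hypothetical helper."""
    k = max(1, int(keep_ratio * edge_scores.numel()))
    top_edges = torch.topk(edge_scores, k).indices
    subgraph = graph.subgraph_from_edges(top_edges)
    with torch.no_grad():
        original = gnn(graph).argmax(dim=-1)
        kept = gnn(subgraph).argmax(dim=-1)
    # The explanation is deemed faithful if the prediction is preserved.
    return subgraph, bool((original == kept).all().item())
```

Note that scoring `subgraph` in isolation, as in the last step here, is
exactly the evaluation step the DSE paper argues is confounded and that the
front-door adjustment above is meant to deconfound.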