OrphicX: A Causality-Inspired Latent Variable Model for Interpreting
Graph Neural Networks
- URL: http://arxiv.org/abs/2203.15209v1
- Date: Tue, 29 Mar 2022 03:08:33 GMT
- Title: OrphicX: A Causality-Inspired Latent Variable Model for Interpreting
Graph Neural Networks
- Authors: Wanyu Lin, Hao Lan, Hao Wang and Baochun Li
- Abstract summary: This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
- Score: 42.539085765796976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a new eXplanation framework, called OrphicX, for
generating causal explanations for any graph neural networks (GNNs) based on
learned latent causal factors. Specifically, we construct a distinct generative
model and design an objective function that encourages the generative model to
produce causal, compact, and faithful explanations. This is achieved by
isolating the causal factors in the latent space of graphs by maximizing the
information flow measurements. We theoretically analyze the cause-effect
relationships in the proposed causal graph, identify node attributes as
confounders between graphs and GNN predictions, and circumvent such confounder
effect by leveraging the backdoor adjustment formula. Our framework is
compatible with any GNNs, and it does not require access to the process by
which the target GNN produces its predictions. In addition, it does not rely on
the linear-independence assumption of the explained features, nor require prior
knowledge on the graph learning tasks. We show a proof-of-concept of OrphicX on
canonical classification problems on graph data. In particular, we analyze the
explanatory subgraphs obtained from explanations for molecular graphs (i.e.,
Mutag) and quantitatively evaluate the explanation performance with frequently
occurring subgraph patterns. Empirically, we show that OrphicX can effectively
identify the causal semantics for generating causal explanations, significantly
outperforming its alternatives.
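For concreteness, here is a hedged sketch of the two causal quantities the abstract refers to; the notation (Z_c for the latent causal factors, A for the node attributes acting as confounder, Y for the target GNN's prediction) is illustrative rather than taken verbatim from the paper. The backdoor adjustment used to circumvent the confounding effect of node attributes takes the standard form

  P(Y | do(Z_c = z)) = \sum_a P(Y | Z_c = z, A = a) P(A = a),

and the information flow whose maximization isolates the causal factors is the interventional analogue of mutual information,

  I(Z_c \to Y) = \sum_z P(z) \sum_y P(y | do(Z_c = z)) \log \frac{P(y | do(Z_c = z))}{\sum_{z'} P(z') P(y | do(Z_c = z'))}.

Intuitively, maximizing this quantity pushes Z_c to carry the information that causally drives the prediction, rather than information that is merely correlated with it through the attribute confounder.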
Related papers
- Heterophilic Graph Neural Networks Optimization with Causal Message-passing [24.796935814432892]
We use causal inference to capture heterophilic message-passing in Graph Neural Networks (GNNs).
We propose CausalMP, a causal message-passing discovery network for heterophilic graph learning.
arXiv Detail & Related papers (2024-11-21T03:59:07Z)
- Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
arXiv Detail & Related papers (2023-10-03T06:25:14Z)
- Structural Explanations for Graph Neural Networks using HSIC [21.929646888419914]
Graph neural networks (GNNs) are a type of neural model that tackles graphical tasks in an end-to-end manner.
The complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute more strongly to the predictions.
In this study, a flexible, model-agnostic explanation method is proposed to detect significant structures in graphs.
arXiv Detail & Related papers (2023-02-04T09:46:47Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most Graph Neural Networks (GNNs) are proposed without considering the distribution shift between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even though these correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve the performance of semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.