Learning and Evaluating Graph Neural Network Explanations based on
Counterfactual and Factual Reasoning
- URL: http://arxiv.org/abs/2202.08816v1
- Date: Thu, 17 Feb 2022 18:30:45 GMT
- Title: Learning and Evaluating Graph Neural Network Explanations based on
Counterfactual and Factual Reasoning
- Authors: Juntao Tan, Shijie Geng, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Yunqi
Li, Yongfeng Zhang
- Abstract summary: Graph Neural Networks (GNNs) have shown great advantages in learning representations for structural data.
In this paper, we draw insights from Counterfactual and Factual (CF^2) reasoning in causal inference theory to solve both the learning and evaluation problems.
To quantitatively evaluate the generated explanations without requiring ground truth, we design metrics based on Counterfactual and Factual reasoning.
- Score: 46.20269166675735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structural data is pervasive in Web applications, such as social networks in
social media, citation networks on academic websites, and thread data in
online forums. Due to its complex topology, it is difficult to process and make
use of the rich information within such data. Graph Neural Networks (GNNs) have
shown great advantages in learning representations for structural data.
However, the opacity of deep learning models makes it non-trivial
to explain and interpret the predictions made by GNNs. Meanwhile, evaluating
GNN explanations is also a major challenge, since in many cases the
ground-truth explanations are unavailable.
In this paper, we draw insights from Counterfactual and Factual (CF^2)
reasoning in causal inference theory to solve both the learning and
evaluation problems in explainable GNNs. For generating explanations, we
propose a model-agnostic framework that formulates an optimization problem based
on both causal perspectives. This distinguishes CF^2 from previous
explainable GNNs that consider only one of them. Another contribution of the
work is the evaluation of GNN explanations. To quantitatively evaluate the
generated explanations without requiring ground truth, we design
metrics based on Counterfactual and Factual reasoning that measure the necessity
and sufficiency of the explanations. Experiments show that whether or not
ground-truth explanations are available, CF^2 generates better
explanations than previous state-of-the-art methods on real-world datasets.
Moreover, statistical analysis confirms the correlation between
performance under ground-truth evaluation and our proposed metrics.
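To make the two causal perspectives concrete, the following is a minimal
PyTorch sketch of a joint factual/counterfactual objective in the spirit of
CF^2; the GNN signature model(x, weighted_adj), the soft edge mask, and the
weights alpha and lam are illustrative assumptions rather than the paper's
exact formulation.

    import torch
    import torch.nn.functional as F

    def cf2_style_loss(model, x, adj, edge_logits, target, alpha=0.6, lam=1e-3):
        """Illustrative joint factual/counterfactual objective (not the authors' code).

        model       -- hypothetical GNN: model(x, weighted_adj) -> class logits (1, C)
        x           -- node features, shape (N, F)
        adj         -- dense adjacency matrix, shape (N, N)
        edge_logits -- learnable edge scores; sigmoid yields a soft explanation mask
        target      -- class predicted on the full graph, shape (1,)
        """
        mask = torch.sigmoid(edge_logits)  # soft edge mask in (0, 1)

        # Factual view (sufficiency): the explanation subgraph alone should
        # still yield the original prediction.
        factual = F.cross_entropy(model(x, adj * mask), target)

        # Counterfactual view (necessity): removing the explanation should
        # change the prediction, so we reward a high loss on the complement.
        # (Unbounded below as written; the paper uses a relaxed formulation.)
        counterfactual = -F.cross_entropy(model(x, adj * (1.0 - mask)), target)

        # A sparsity term keeps the explanation compact.
        return alpha * factual + (1.0 - alpha) * counterfactual + lam * mask.mean()

Minimizing this loss over edge_logits trades off the two causal conditions
while keeping the selected subgraph small.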
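The evaluation side can be sketched in the same spirit: without ground truth,
an explanation is checked for sufficiency (keeping only the explanation
preserves the prediction) and necessity (deleting it flips the prediction).
The function name and the hard binary mask below are assumptions for
illustration, not the paper's exact metrics.

    import torch

    @torch.no_grad()
    def necessity_sufficiency(model, x, adj, edge_mask, target):
        """Ground-truth-free checks on a hard explanation mask (illustrative sketch)."""
        pred_keep = model(x, adj * edge_mask).argmax(dim=-1)        # factual test
        pred_drop = model(x, adj * (1 - edge_mask)).argmax(dim=-1)  # counterfactual test
        sufficient = bool((pred_keep == target).all())  # explanation preserves prediction
        necessary = bool((pred_drop != target).all())   # removing it changes prediction
        return necessary, sufficient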
Related papers
- Incorporating Retrieval-based Causal Learning with Information
Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv: 2024-02-07T09:57:39Z
- Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
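The prevailing fidelity metrics that this paper critiques can be written down
compactly; below is a generic sketch of the common fidelity+/- definitions
(the robust variants proposed in the paper differ, see the full text). The
model signature and dense masks are assumptions.

    import torch

    @torch.no_grad()
    def fidelity_plus_minus(model, x, adj, edge_mask, target):
        """Common fidelity+/- for one explanation (sketch of the prevailing metrics).

        target    -- predicted class index (int) on the full graph
        fidelity+ : probability drop when the explanation edges are removed
                    (large value suggests the explanation was necessary).
        fidelity- : probability drop when only the explanation edges are kept
                    (small value suggests the explanation was sufficient).
        """
        p_full = torch.softmax(model(x, adj), dim=-1)[0, target]
        p_drop = torch.softmax(model(x, adj * (1 - edge_mask)), dim=-1)[0, target]
        p_keep = torch.softmax(model(x, adj * edge_mask), dim=-1)[0, target]
        return (p_full - p_drop).item(), (p_full - p_keep).item()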
arXiv: 2023-10-03T06:25:14Z
- Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation [0.17842332554022688]
Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions.
We propose a novel GNN explanation framework named SCALE, which is general and fast for explaining predictions.
During training, a black-box GNN model guides the learners under an online knowledge distillation paradigm.
Specifically, edge masking and random walk with restart procedures are executed to provide structural explanations for graph-level and node-level predictions.
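Random walk with restart is a standard graph-proximity primitive; a generic
power-iteration sketch (not the SCALE implementation) is:

    import torch

    def random_walk_with_restart(adj, seed, restart_p=0.15, iters=100):
        """Relevance of every node to a seed node via power iteration (generic sketch)."""
        n = adj.shape[0]
        trans = adj / adj.sum(dim=0, keepdim=True).clamp(min=1e-12)  # column-stochastic
        e = torch.zeros(n)
        e[seed] = 1.0                 # restart distribution concentrated on the seed
        r = e.clone()
        for _ in range(iters):
            r = (1 - restart_p) * trans @ r + restart_p * e
        return r                      # r[v]: relevance of node v to the seed

Nodes scoring highly around a prediction's target node can then be read as a
structural explanation.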
arXiv: 2022-10-20T08:44:57Z
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv: 2022-02-16T21:11:47Z
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand model transparency.
We propose a new framework that finds the $K$ nearest labeled nodes for each unlabeled node to provide explainable node classification, as sketched below.
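A minimal sketch of the K-nearest-labeled-nodes idea, assuming node
embeddings from any trained GNN (the framework's actual similarity measure
may differ):

    import torch

    def k_nearest_labeled(embeddings, labeled_idx, query_idx, k=5):
        """Indices of the k labeled nodes closest to a query node in embedding space.

        embeddings  -- (N, D) node embeddings from a trained GNN
        labeled_idx -- LongTensor of labeled node ids
        query_idx   -- int id of the unlabeled query node
        """
        dists = torch.cdist(embeddings[query_idx].unsqueeze(0),
                            embeddings[labeled_idx]).squeeze(0)
        nearest = dists.topk(k, largest=False).indices
        return labeled_idx[nearest]  # these neighbors and their labels form the explanation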
arXiv: 2021-08-26T22:45:11Z
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv: 2021-08-07T07:44:33Z
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv: 2021-04-18T10:40:37Z
- Generative Causal Explanations for Graph Neural Networks [39.60333255875979]
Gem is a model-agnostic approach for providing interpretable explanations for any GNNs on various graph learning tasks.
It achieves a relative increase in explanation accuracy of up to $30\%$ and speeds up the explanation process by up to $110\times$ compared to its state-of-the-art alternatives.
arXiv: 2021-04-14T06:22:21Z
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv: 2020-11-09T17:15:03Z
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv: 2020-06-25T00:45:52Z
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.