Empowering Counterfactual Reasoning over Graph Neural Networks through
Inductivity
- URL: http://arxiv.org/abs/2306.04835v1
- Date: Wed, 7 Jun 2023 23:40:18 GMT
- Authors: Samidha Verma, Burouj Armgaan, Sourav Medya, Sayan Ranu
- Abstract summary: Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
- Score: 7.094238868711952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have various practical applications, such as
drug discovery, recommendation engines, and chip design. However, GNNs lack
transparency as they cannot provide understandable explanations for their
predictions. To address this issue, counterfactual reasoning is used. The main
goal is to make minimal changes to the input graph of a GNN in order to alter
its prediction. While several algorithms have been proposed for counterfactual
explanations of GNNs, most of them have two main drawbacks. Firstly, they only
consider edge deletions as perturbations. Secondly, the counterfactual
explanation models are transductive, meaning they do not generalize to unseen
data. In this study, we introduce an inductive algorithm called INDUCE, which
overcomes these limitations. By conducting extensive experiments on several
datasets, we demonstrate that incorporating edge additions leads to better
counterfactual results compared to the existing methods. Moreover, the
inductive modeling approach allows INDUCE to directly predict counterfactual
perturbations without requiring instance-specific training. This results in
significant computational speed improvements compared to baseline methods and
enables scalable counterfactual analysis for GNNs.
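The abstract only sketches the setting, but the core idea of counterfactual reasoning over a GNN can be illustrated concretely: search for the fewest edge flips, allowing both deletions and additions, that change a graph classifier's prediction. The snippet below is a minimal illustrative sketch, not the paper's actual INDUCE algorithm: the tiny "GNN" (one sum-aggregation layer with a linear readout) and the greedy flip search are assumptions made for demonstration only.

```python
import itertools
import numpy as np

def toy_gnn(adj, feats, w):
    """One sum-aggregation layer (with a self connection) and a linear readout."""
    h = adj @ feats + feats             # each node sums its neighbours plus itself
    return float(h.sum(axis=0) @ w)     # graph-level score; sign = predicted class

def counterfactual_flips(adj, feats, w, max_flips=3):
    """Greedily flip the single edge that moves the score furthest toward the
    opposite class; stop as soon as the predicted sign changes."""
    base = np.sign(toy_gnn(adj, feats, w))
    adj = adj.copy()
    flips = []
    n = adj.shape[0]
    for _ in range(max_flips):
        best = None
        for i, j in itertools.combinations(range(n), 2):
            cand = adj.copy()
            cand[i, j] = cand[j, i] = 1 - cand[i, j]   # delete OR add the edge
            score = toy_gnn(cand, feats, w)
            if best is None or base * score < base * best[0]:
                best = (score, (i, j), cand)
        score, edge, adj = best
        flips.append(edge)
        if np.sign(score) != base:                      # prediction flipped
            return flips
    return None                                         # search budget exhausted
```

Note the flip operation `1 - cand[i, j]` covers both perturbation types the abstract contrasts: it removes an existing edge or adds a missing one, whereas deletion-only methods would restrict the candidate set to present edges. An inductive model, as described above, would instead learn to predict such flips directly rather than searching per instance.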
Related papers
- Incorporating Retrieval-based Causal Learning with Information
Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms are proposed, but most of them formalize this task by searching the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that drives the prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Robust Counterfactual Explanations on Graph Neural Networks [42.91881080506145]
Massive deployment of Graph Neural Networks (GNNs) in high-stake applications generates a strong demand for explanations that are robust to noise.
Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction.
We propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs.
arXiv Detail & Related papers (2021-07-08T19:50:00Z)
- CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks [40.47070962945751]
Graph neural networks (GNNs) have shown increasing promise in real-world applications.
We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs.
arXiv Detail & Related papers (2021-02-05T17:58:14Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
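The paper above uses differentiable masking; as a hypothetical simplification of the same idea, one can score each edge by how much the model's output moves when that edge alone is removed, and drop the edges that barely matter. Everything here (`prune_unnecessary_edges`, the tolerance, treating `model` as any callable from an adjacency matrix to a score) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def prune_unnecessary_edges(adj, model, tol=1e-6):
    """Greedily drop undirected edges whose individual removal leaves the
    model output (a scalar score) essentially unchanged."""
    base = model(adj)
    kept = adj.copy()
    edges = [(i, j) for i in range(adj.shape[0])
             for j in range(i + 1, adj.shape[0]) if adj[i, j]]
    for i, j in edges:
        trial = kept.copy()
        trial[i, j] = trial[j, i] = 0        # tentatively remove the edge
        if abs(model(trial) - base) <= tol:  # output unchanged: edge unnecessary
            kept = trial
    return kept
```

On a model that ignores most of the graph, this prunes a large fraction of edges while leaving the output intact, mirroring the finding summarized above.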
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep because of the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.