Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
- URL: http://arxiv.org/abs/2105.08621v1
- Date: Tue, 18 May 2021 15:53:09 GMT
- Title: Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
- Authors: Thorben Funke, Megha Khosla, Avishek Anand
- Abstract summary: We find the previous explanation generation approaches, which maximize the mutual information between the label distribution produced by the GNN model and the explanation, to be restrictive.
Specifically, existing approaches do not enforce explanations to be predictive, sparse, or robust to input perturbations.
We propose a novel approach, Zorro, based on principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for fidelity.
- Score: 6.004582130591279
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the ever-increasing popularity and applications of graph neural
networks, several proposals have been made to interpret and understand the
decisions of a GNN model. Explanations for a GNN model differ in principle from
those in other input settings. It is important to attribute the decision to input
features and other related instances connected by the graph structure. We find
the previous explanation generation approaches, which maximize the mutual
information between the label distribution produced by the GNN model and the
explanation, to be restrictive. Specifically, existing approaches do not enforce
explanations to be predictive, sparse, or robust to input perturbations.
In this paper, we lay down some of the fundamental principles that an
explanation method for GNNs should follow and introduce a metric, fidelity, as a
measure of the explanation's effectiveness. We propose a novel approach, Zorro,
based on the principles from rate-distortion theory that uses a simple
combinatorial procedure to optimize for fidelity. Extensive experiments on real
and synthetic datasets reveal that Zorro produces sparser, more stable, and more
faithful explanations than existing GNN explanation approaches.
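To make the contrast concrete: approaches such as GNNExplainer select an explanatory subgraph $G_S$ with features $X_S$ by maximizing mutual information with the model's label distribution $Y$,

\[
\max_{G_S} \; \mathrm{MI}\big(Y, (G_S, X_S)\big) \;=\; H(Y) \;-\; H\big(Y \mid G = G_S,\, X = X_S\big),
\]

which is the objective the abstract argues is restrictive. A plausible reading of the fidelity metric, reconstructed from the abstract's rate-distortion framing rather than quoted from the paper, scores a selection $S$ of nodes and features by how often the prediction survives obfuscation of everything outside $S$:

\[
\mathcal{F}(S) \;=\; \mathbb{E}_{\tilde{X} \sim \mathcal{N}}\Big[\mathbb{1}\big(\hat{y}(X) \;=\; \hat{y}\big(M_S \odot X + (1 - M_S) \odot \tilde{X}\big)\big)\Big],
\]

where $M_S$ is the binary mask of selected entries, $\odot$ is the elementwise product, and $\mathcal{N}$ is a noise distribution over the unselected entries. The "simple combinatorial procedure" can then be sketched as a greedy loop that adds the element with the largest fidelity gain until a target fidelity is reached. The `model` callable, the value-shuffling noise distribution, and the threshold `tau` below are illustrative assumptions of this sketch, not the paper's exact algorithm:

```python
import numpy as np

def empirical_fidelity(model, x, mask, n_samples=100, rng=None):
    """Monte-Carlo fidelity estimate: the fraction of noise perturbations
    of the unselected entries that leave the predicted class unchanged.
    `model` is any callable mapping a feature matrix to class scores
    (an assumed interface)."""
    if rng is None:
        rng = np.random.default_rng(0)
    base_pred = model(x).argmax()
    hits = 0
    for _ in range(n_samples):
        # Assumed noise distribution: a global shuffle of the observed values.
        noise = rng.permutation(x.ravel()).reshape(x.shape)
        x_pert = np.where(mask, x, noise)  # keep selected entries, obfuscate the rest
        hits += int(model(x_pert).argmax() == base_pred)
    return hits / n_samples

def _with_selected(mask, idx):
    """Copy of `mask` with entry `idx` additionally selected."""
    trial = mask.copy()
    trial[idx] = True
    return trial

def greedy_explain(model, x, tau=0.9, n_samples=100):
    """Greedy combinatorial selection: repeatedly add the entry whose
    inclusion yields the highest estimated fidelity, until the estimate
    reaches the (hypothetical) threshold `tau`."""
    mask = np.zeros(x.shape, dtype=bool)
    candidates = list(np.ndindex(x.shape))
    while candidates and empirical_fidelity(model, x, mask, n_samples) < tau:
        best = max(candidates,
                   key=lambda idx: empirical_fidelity(
                       model, x, _with_selected(mask, idx), n_samples))
        mask[best] = True
        candidates.remove(best)
    return mask
```

In the paper's setting the candidate elements are nodes and node features of the target's computational graph rather than arbitrary matrix entries, and a practical implementation would reuse fidelity estimates across greedy steps instead of recomputing them from scratch; the sketch only illustrates the select-perturb-recheck loop.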
Related papers
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper, we demonstrate that these explanations unfortunately cannot be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z)
- ACGAN-GNNExplainer: Auxiliary Conditional Generative Explainer for Graph Neural Networks [7.077341403454516]
Graph neural networks (GNNs) have proven their efficacy in a variety of real-world applications, but their underlying mechanisms remain a mystery.
To address this challenge and enable reliable decision-making, many GNN explainers have been proposed in recent years.
We introduce the Auxiliary Conditional Generative Adversarial Network (ACGAN) into the field of GNN explanation and propose a new GNN explainer dubbed ACGAN-GNNExplainer.
arXiv Detail & Related papers (2023-09-29T01:20:28Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- FlowX: Towards Explainable Graph Neural Networks via Message Flows [59.025023020402365]
We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.
We propose a novel method here, known as FlowX, to explain GNNs by identifying important message flows.
We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets.
arXiv Detail & Related papers (2022-06-26T22:48:15Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as a search for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Robust Counterfactual Explanations on Graph Neural Networks [42.91881080506145]
Massive deployment of Graph Neural Networks (GNNs) in high-stake applications generates a strong demand for explanations that are robust to noise.
Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction.
We propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs.
arXiv Detail & Related papers (2021-07-08T19:50:00Z)
- SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [0.0]
We propose a method to improve the explanation quality of node classification tasks through aggregation of auxiliary explanations.
Applying SEEN does not require modification of a graph and can be used with diverse explainability techniques.
Experiments on matching motif-participating nodes from a given graph show improvements in explanation accuracy of up to 12.71%.
arXiv Detail & Related papers (2021-06-16T03:04:46Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)