Global Explainability of GNNs via Logic Combination of Learned Concepts
- URL: http://arxiv.org/abs/2210.07147v3
- Date: Tue, 11 Apr 2023 18:15:20 GMT
- Title: Global Explainability of GNNs via Logic Combination of Learned Concepts
- Authors: Steve Azzolin, Antonio Longa, Pietro Barbiero, Pietro Liò, Andrea Passerini
- Abstract summary: We propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts.
GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations.
Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model.
- Score: 11.724402780594257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While instance-level explanation of GNNs is a well-studied problem with plenty
of approaches being developed, providing a global explanation for the behaviour
of a GNN is much less explored, despite its potential in interpretability and
debugging. Existing solutions either simply list local explanations for a given
class, or generate a synthetic prototypical graph with maximal score for a
given class, completely missing any combinatorial aspect that the GNN could
have learned. In this work, we propose GLGExplainer (Global Logic-based GNN
Explainer), the first Global Explainer capable of generating explanations as
arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a
fully differentiable architecture that takes local explanations as inputs and
combines them into a logic formula over graphical concepts, represented as
clusters of local explanations. Contrary to existing solutions, GLGExplainer
provides accurate and human-interpretable global explanations that are
perfectly aligned with ground-truth explanations (on synthetic data) or match
existing domain knowledge (on real-world data). Extracted formulas are faithful
to the model predictions, to the point of providing insights into some
occasionally incorrect rules learned by the model, making GLGExplainer a
promising diagnostic tool for learned GNNs.
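The pipeline described in the abstract can be approximated with a minimal, non-differentiable sketch. Everything below is a simplification for illustration: `emb` stands in for embeddings of local explanation subgraphs produced by some instance-level explainer, k-means replaces GLGExplainer's learned prototype layer, and a shallow decision tree replaces its entropy-based logic network; the actual architecture trains these components end to end.

```python
# Hypothetical simplification of a GLGExplainer-style pipeline:
# (1) cluster local-explanation embeddings into "concepts",
# (2) represent each graph by its Boolean concept activations,
# (3) read an interpretable rule over concepts off a shallow tree.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in data: one embedding per local explanation subgraph
# (in practice these would come from a frozen GNN encoder).
n_graphs, emb_dim, n_concepts = 200, 16, 4
emb = rng.normal(size=(n_graphs, emb_dim))
y = rng.integers(0, 2, size=n_graphs)  # the GNN's predicted classes

# Step 1: concepts = clusters of local explanations.
concepts = KMeans(n_clusters=n_concepts, n_init=10, random_state=0).fit(emb)

# Step 2: Boolean activations (1 iff the explanation falls in that cluster);
# GLGExplainer instead uses soft distances to learned prototypes.
activations = np.eye(n_concepts, dtype=int)[concepts.labels_]

# Step 3: extract a rule over concept activations.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(activations, y)
print(export_text(tree, feature_names=[f"concept_{i}" for i in range(n_concepts)]))
```

Reading the printed tree top-down yields a Boolean formula over concepts (e.g. `concept_2 AND NOT concept_0 -> class 1`), which is the kind of global explanation the paper targets; on random stand-in data the extracted rule is of course meaningless.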
Related papers
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimal perturbations to input graphs that change the GNN's predictions (a generic sketch of this objective appears after this list).
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
arXiv Detail & Related papers (2024-10-25T21:39:05Z)
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations unfortunately cannot be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- A Meta-Learning Approach for Training Explainable Graph Neural Networks [10.11960004698409]
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z)
- Generative Causal Explanations for Graph Neural Networks [39.60333255875979]
Gem is a model-agnostic approach for providing interpretable explanations for any GNNs on various graph learning tasks.
It achieves a relative increase of the explanation accuracy by up to 30% and speeds up the explanation process by up to 110× as compared to its state-of-the-art alternatives.
arXiv Detail & Related papers (2021-04-14T06:22:21Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
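As a side note to the GlobalGCE entry above: the generic counterfactual objective it refers to (the smallest perturbation of an input graph that flips the model's prediction) can be sketched with a toy greedy search. The `predict` function below is a stand-in classifier, not GlobalGCE's actual method, and the search is a naive baseline for illustration only.

```python
# Toy sketch of graph counterfactual search: greedily delete edges until the
# (stand-in) classifier's prediction flips, keeping the edit set small.
import networkx as nx

def predict(g: nx.Graph) -> int:
    # Stand-in for a trained GNN: class 1 iff the graph contains a triangle.
    return int(any(nx.triangles(g).values()))

def greedy_counterfactual(g: nx.Graph, max_edits: int = 5):
    """Return a small list of edge deletions that flips the prediction."""
    original = predict(g)
    current, edits = g.copy(), []
    for _ in range(max_edits):
        for u, v in list(current.edges()):
            trial = current.copy()
            trial.remove_edge(u, v)
            if predict(trial) != original:      # prediction flipped: done
                return edits + [(u, v)]
        if current.number_of_edges() == 0:
            break
        # no single deletion flips it: commit one deletion and keep searching
        u, v = next(iter(current.edges()))
        current.remove_edge(u, v)
        edits.append((u, v))
    return None  # no counterfactual found within the edit budget

g = nx.complete_graph(4)          # contains triangles, so predicted class 1
print(greedy_counterfactual(g))   # e.g. [(0, 1), (2, 3)]
```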