GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the
Language of Motifs
- URL: http://arxiv.org/abs/2202.08815v2
- Date: Fri, 7 Jul 2023 12:53:40 GMT
- Title: GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the
Language of Motifs
- Authors: Alan Perotti, Paolo Bajardi, Francesco Bonchi, and André Panisson
- Abstract summary: GRAPHSHAP is able to provide motif-based explanations for identity-aware graph classifiers.
We show how a simple kernel can efficiently approximate explanation scores, thus allowing GRAPHSHAP to scale on scenarios with a large explanation space.
Our experiments highlight how the classification provided by a black-box model can be effectively explained by a few connectomics patterns.
- Score: 11.453325862543094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most methods for explaining black-box classifiers (e.g. on tabular data,
images, or time series) rely on measuring the impact that removing/perturbing
features has on the model output. This forces the explanation language to match
the classifier's feature space. However, when dealing with graph data, in which
the basic features correspond to the edges describing the graph structure, this
matching between feature space and explanation language might not be
appropriate. Decoupling the feature space (edges) from a desired high-level
explanation language (such as motifs) is thus a major challenge towards
developing actionable explanations for graph classification tasks. In this
paper we introduce GRAPHSHAP, a Shapley-based approach able to provide
motif-based explanations for identity-aware graph classifiers, assuming no
knowledge whatsoever about the model or its training data: the only requirement
is that the classifier can be queried as a black-box at will. For the sake of
computational efficiency we explore a progressive approximation strategy and
show how a simple kernel can efficiently approximate explanation scores, thus
allowing GRAPHSHAP to scale on scenarios with a large explanation space (i.e.
large number of motifs). We showcase GRAPHSHAP on a real-world brain-network
dataset consisting of patients affected by Autism Spectrum Disorder and a
control group. Our experiments highlight how the classification provided by a
black-box model can be effectively explained by a few connectomics patterns.
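
As a loose illustration of the kernel-based approximation described above (a minimal sketch, not the authors' implementation), the following KernelSHAP-style regression scores motifs by sampling coalitions, querying the black-box on perturbed graphs, and solving a weighted least-squares problem. Here `model` and `mask_motifs` are hypothetical placeholders for the queried classifier and for a routine that removes the edges of de-selected motifs.

```python
import numpy as np
from math import comb

def shap_kernel_weight(m, s):
    """KernelSHAP weight for a coalition of size s out of m motifs.
    The empty and full coalitions get a large finite weight as a
    stand-in for the exact (infinite-weight) constraints."""
    if s == 0 or s == m:
        return 1e6
    return (m - 1) / (comb(m, s) * s * (m - s))

def graphshap_scores(model, graph, motifs, mask_motifs, n_samples=2048, seed=0):
    """Approximate per-motif Shapley scores by weighted least squares
    over sampled motif coalitions (KernelSHAP-style).

    model       : black-box callable, graph -> class score
    mask_motifs : hypothetical helper, (graph, keep) -> perturbed graph
                  in which motifs with keep[i] == 0 are removed
    """
    rng = np.random.default_rng(seed)
    m = len(motifs)
    Z = rng.integers(0, 2, size=(n_samples, m))     # random coalitions
    Z[0, :], Z[1, :] = 0, 1                         # empty and full coalitions
    y = np.array([model(mask_motifs(graph, z)) for z in Z])
    w = np.array([shap_kernel_weight(m, int(z.sum())) for z in Z])
    X = np.hstack([np.ones((n_samples, 1)), Z])     # intercept + coalition mask
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[1:]                                 # one score per motif
```

The point of the kernel is scalability: the regression uses only a sample of coalitions rather than all 2^m motif subsets, which is how the method can cope with a large explanation space.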
Related papers
- Explaining Graph Neural Networks for Node Similarity on Graphs [9.14795454299225]
We investigate how GNN-based methods for computing node similarities can be augmented with explanations.
Specifically, we evaluate the performance of two approaches to explanation in GNNs.
We find that unlike MI explanations, gradient-based explanations have three desirable properties.
arXiv Detail & Related papers (2024-07-10T13:20:47Z)
- Structural Node Embeddings with Homomorphism Counts [2.0131144893314232]
Homomorphism counts capture local structural information.
We experimentally show the effectiveness of homomorphism count based node embeddings.
Our approach capitalises on the efficient computability of graph homomorphism counts for bounded treewidth graph classes.
arXiv Detail & Related papers (2023-08-29T13:14:53Z)
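
To make the idea in the entry above concrete, here is a brute-force toy (exponential in pattern size, so for small graphs and patterns only; the paper's bounded-treewidth algorithms are far more efficient) that embeds each node by its rooted homomorphism counts:

```python
from itertools import product
import networkx as nx

def rooted_hom_count(pattern, root, G, v):
    """Count homomorphisms pattern -> G mapping `root` to `v`; a
    homomorphism is any node map sending pattern edges to graph edges."""
    others = [u for u in pattern.nodes() if u != root]
    count = 0
    for image in product(G.nodes(), repeat=len(others)):
        phi = dict(zip(others, image))
        phi[root] = v
        if all(G.has_edge(phi[a], phi[b]) for a, b in pattern.edges()):
            count += 1
    return count

def hom_embedding(G, rooted_patterns):
    """Embed every node of G by its vector of rooted homomorphism counts."""
    return {v: [rooted_hom_count(p, r, G, v) for p, r in rooted_patterns]
            for v in G.nodes()}

# Usage on a small graph with a few rooted patterns:
rooted_patterns = [
    (nx.path_graph(2), 0),   # edge rooted at an endpoint (count = degree)
    (nx.path_graph(3), 0),   # 2-path rooted at an endpoint
    (nx.cycle_graph(3), 0),  # triangle rooted at a corner
]
G = nx.krackhardt_kite_graph()
emb = hom_embedding(G, rooted_patterns)
```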
- Robust Ante-hoc Graph Explainer using Bilevel Optimization [0.7999703756441758]
We propose RAGE, a novel and flexible ante-hoc explainer for graph neural networks.
RAGE can effectively identify molecular substructures that contain the full information needed for prediction.
Our experiments on various molecular classification tasks show that RAGE explanations are better than existing post-hoc and ante-hoc approaches.
arXiv Detail & Related papers (2023-05-25T05:50:38Z)
- Probing Graph Representations [77.7361299039905]
We use a probing framework to quantify the amount of meaningful information captured in graph representations.
Our findings on molecular datasets show the potential of probing for understanding the inductive biases of graph-based models.
We advocate for probing as a useful diagnostic tool for evaluating graph-based models.
arXiv Detail & Related papers (2023-03-07T14:58:18Z)
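
The probing recipe above reduces, in its simplest form, to training a classifier on frozen representations. The sketch below assumes pre-computed graph-level embeddings and property labels (random data stands in for both, so the names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe(embeddings, labels):
    """Mean cross-validated accuracy of a linear probe trained on frozen
    graph-level representations; higher accuracy means the probed
    property is more linearly decodable from the embeddings."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, embeddings, labels, cv=5).mean()

# Hypothetical usage: random vectors stand in for real GNN embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))   # 200 graphs, 64-dim embeddings
y = (X[:, 0] > 0).astype(int)    # a property to probe for
print(f"probe accuracy: {probe(X, y):.2f}")
```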
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
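
A very loose, hypothetical rendering of the matching idea above (not MatchExplainer's actual algorithm): greedily align nodes of the target graph with nodes of a counterpart instance by embedding distance and keep the best-aligned pairs as a joint substructure.

```python
import numpy as np

def joint_substructure(emb_a, emb_b, k):
    """Greedily pair nodes of the target graph (A) with nodes of a
    counterpart graph (B) by smallest embedding distance; the k best
    pairs act as the shared explanatory substructure.
    emb_a: (n_a, d) and emb_b: (n_b, d) node embeddings from the GNN."""
    dist = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    pairs = []
    for _ in range(min(k, min(dist.shape))):
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        pairs.append((int(i), int(j)))
        dist[i, :] = np.inf   # use each node at most once
        dist[:, j] = np.inf
    return pairs
```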
- PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks [12.16789930553124]
Prototype-bAsed GNN-Explainer (PAGE) is a novel model-level explanation method for graph classification.
PAGE discovers a common subgraph pattern by iteratively searching for highly matching nodes.
Using six graph classification datasets, we demonstrate that PAGE qualitatively and quantitatively outperforms the state-of-the-art model-level explanation method.
arXiv Detail & Related papers (2022-10-31T09:10:06Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- GLANCE: Global to Local Architecture-Neutral Concept-based Explanations [26.76139301708958]
We propose a novel twin-surrogate explainability framework to explain the decisions made by any CNN-based image classifier.
We first disentangle latent features from the classifier, followed by aligning these features to observed/human-defined 'context' features.
These aligned features form semantically meaningful concepts that are used for extracting a causal graph depicting the 'perceived' data-generating process.
We provide a generator to visualize the 'effect' of interactions among features in latent space and draw feature importance therefrom as local explanations.
arXiv Detail & Related papers (2022-07-05T09:52:09Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and has better generalization power to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be applied in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
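
As a rough sketch of the parameterized-explainer idea in the entry above (shapes and names are illustrative, not PGExplainer's exact architecture): a small MLP maps the frozen GNN embeddings of an edge's endpoints to an edge-importance score, so a single trained explainer transfers to unseen graphs, which is what enables the inductive setting mentioned in the summary.

```python
import torch
import torch.nn as nn

class EdgeMaskExplainer(nn.Module):
    """Parameterized explainer sketch: predicts an importance score per
    edge from the frozen GNN embeddings of the edge's two endpoints, so
    one trained explainer can be applied to unseen graphs."""

    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor):
        # node_emb: (n_nodes, emb_dim); edge_index: (2, n_edges)
        src, dst = edge_index
        pair = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        return torch.sigmoid(self.mlp(pair)).squeeze(-1)  # per-edge mask in (0, 1)

# Training (not shown) would push the masked prediction toward the
# original one while keeping the mask sparse, across many training graphs.
```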