GraphSVX: Shapley Value Explanations for Graph Neural Networks
- URL: http://arxiv.org/abs/2104.10482v1
- Date: Sun, 18 Apr 2021 10:40:37 GMT
- Title: GraphSVX: Shapley Value Explanations for Graph Neural Networks
- Authors: Alexandre Duval and Fragkiskos D. Malliaros
- Abstract summary: Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
- Score: 81.83769974301995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) achieve significant performance for various
learning tasks on geometric data due to the incorporation of graph structure
into the learning of node representations, which renders their comprehension
challenging. In this paper, we first propose a unified framework satisfied by
most existing GNN explainers. Then, we introduce GraphSVX, a post hoc local
model-agnostic explanation method specifically designed for GNNs. GraphSVX is a
decomposition technique that captures the "fair" contribution of each feature
and node towards the explained prediction by constructing a surrogate model on
a perturbed dataset. It extends the Shapley value from game theory to graphs
and ultimately provides it as the explanation. Experiments on real-world and
synthetic datasets demonstrate that GraphSVX achieves state-of-the-art
performance compared to baseline models while presenting core theoretical and
human-centric properties.
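To make the game-theoretic idea concrete, the sketch below computes exact Shapley values for a toy "prediction" function by enumerating all coalitions of features. This is not GraphSVX itself (which approximates these values by fitting a surrogate model on a perturbed dataset, since exact enumeration is exponential in the number of features and nodes); the function names and toy payoff are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_players):
    """Exact Shapley values by enumerating all coalitions.

    value_fn maps a frozenset of player indices to a real payoff.
    Feasible only for small n; methods like GraphSVX instead
    approximate these values with a surrogate model trained on
    perturbed samples.
    """
    players = range(n_players)
    phi = [0.0] * n_players
    for i in players:
        others = [p for p in players if p != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                S = frozenset(coalition)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_players - len(S) - 1)
                     / factorial(n_players))
                # Marginal contribution of player i to coalition S
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy additive "prediction" over 3 features: feature 0 contributes 2,
# feature 1 contributes 1, feature 2 contributes nothing.
payoff = lambda S: 2.0 * (0 in S) + 1.0 * (1 in S)
print(shapley_values(payoff, 3))  # additive game -> [2.0, 1.0, 0.0]
```

For an additive game like this one, each player's Shapley value equals its individual contribution, which is the "fair attribution" property the abstract refers to.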
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency
between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- Evaluating Explainability for Graph Neural Networks [21.339111121529815]
We introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets.
We include ShapeGGen and several real-world graph datasets into an open-source graph explainability library, GraphXAI.
arXiv Detail & Related papers (2022-08-19T13:43:52Z)
- Taxonomy of Benchmarks in Graph Representation Learning [14.358071994798964]
Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry.
It is currently not well understood what aspects of a given model are probed by graph representation learning benchmarks.
Here, we develop a principled approach to taxonomize benchmarking datasets according to a "sensitivity profile" that is based on how much GNN performance changes due to a collection of graph perturbations.
arXiv Detail & Related papers (2022-06-15T18:01:10Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Optimal Transport Graph Neural Networks [31.191844909335963]
Current graph neural network (GNN) architectures naively average or sum node embeddings into an aggregated graph representation.
We introduce OT-GNN, a model that computes graph embeddings using parametric prototypes.
arXiv Detail & Related papers (2020-06-08T14:57:39Z)
- Incomplete Graph Representation and Learning via Partial Graph Neural Networks [7.227805463462352]
In many applications, graphs may come in an incomplete form where the attributes of some nodes are partially unknown or missing.
Existing GNNs are generally designed for complete graphs and cannot directly handle attribute-incomplete graph data.
We develop novel partial-aggregation-based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning.
arXiv Detail & Related papers (2020-03-23T08:29:59Z)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks [45.824642013383944]
Graph neural networks (GNNs) have been shown to be successful in effectively representing graph-structured data.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
arXiv Detail & Related papers (2020-01-17T09:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.