GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph
Neural Networks
- URL: http://arxiv.org/abs/2107.11889v1
- Date: Sun, 25 Jul 2021 20:52:48 GMT
- Title: GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph
Neural Networks
- Authors: Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Liò
- Abstract summary: GCExplainer is an unsupervised approach for post-hoc discovery and extraction of global concept-based explanations for graph neural networks (GNNs).
We demonstrate the success of our technique on five node classification datasets and two graph classification datasets, showing that we are able to discover and extract high-quality concept representations by putting the human in the loop.
- Score: 0.3441021278275805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While graph neural networks (GNNs) have been shown to perform well on
graph-based data from a variety of fields, they suffer from a lack of
transparency and accountability, which hinders trust and consequently the
deployment of such models in high-stakes and safety-critical scenarios. Even
though recent research has investigated methods for explaining GNNs, these
methods are limited to single-instance explanations, also known as local
explanations. Motivated by the aim of providing global explanations, we adapt
the well-known Automated Concept-based Explanation approach (Ghorbani et al.,
2019) to GNN node and graph classification, and propose GCExplainer.
GCExplainer is an unsupervised approach for post-hoc discovery and extraction
of global concept-based explanations for GNNs, which puts the human in the
loop. We demonstrate the success of our technique on five node classification
datasets and two graph classification datasets, showing that we are able to
discover and extract high-quality concept representations by putting the human
in the loop. We achieve a maximum completeness score of 1 and an average
completeness score of 0.753 across the datasets. Finally, we show that the
concept-based explanations provide an improved insight into the datasets and
GNN models compared to the state-of-the-art explanations produced by
GNNExplainer (Ying et al., 2019).
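As a concrete illustration of the pipeline described above, the sketch below clusters the final-layer activations of a trained GNN with k-means to discover concepts, and scores completeness as the accuracy of a simple classifier that predicts the GNN's output class from the concept assignment alone; a human then inspects the discovered clusters to judge concept quality. This is a minimal sketch under assumed interfaces: the helper names (`discover_concepts`, `completeness`, `model.embed`), the number of concepts, and the decision-tree completeness classifier are illustrative choices, not taken from the paper's released code.

```python
# Hedged sketch of a GCExplainer-style pipeline, assuming the trained GNN's
# final-layer node embeddings are available as a NumPy array.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def discover_concepts(embeddings: np.ndarray, n_concepts: int = 10):
    """k-means over node activations; each cluster index is treated as one concept."""
    km = KMeans(n_clusters=n_concepts, random_state=0, n_init=10)
    concept_ids = km.fit_predict(embeddings)
    return km, concept_ids

def completeness(concept_ids: np.ndarray, predictions: np.ndarray) -> float:
    """How well the concept assignment alone recovers the GNN's predicted class."""
    X = concept_ids.reshape(-1, 1)
    clf = DecisionTreeClassifier().fit(X, predictions)
    return accuracy_score(predictions, clf.predict(X))

# Usage (hypothetical model/data interfaces):
#   emb = model.embed(data.x, data.edge_index).detach().cpu().numpy()
#   preds = model(data.x, data.edge_index).argmax(dim=1).cpu().numpy()
#   km, cids = discover_concepts(emb, n_concepts=10)
#   print("completeness:", completeness(cids, preds))
# A human inspects each concept by visualising the nodes closest to each
# cluster centroid together with their local neighbourhood.
```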
Related papers
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimum perturbations on input graphs that change the GNN predictions (a greedy sketch of this idea appears after this list).
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
arXiv Detail & Related papers (2024-10-25T21:39:05Z) - Path-based Explanation for Knowledge Graph Completion [17.541247786437484]
Proper explanations for the results of GNN-based Knowledge Graph Completion models increase model transparency.
Existing practices for explaining KGC tasks rely on instance/subgraph-based approaches.
We propose Power-Link, the first path-based KGC explainer that explores GNN-based models.
arXiv Detail & Related papers (2024-01-04T14:19:37Z) - ACGAN-GNNExplainer: Auxiliary Conditional Generative Explainer for Graph
Neural Networks [7.077341403454516]
Graph neural networks (GNNs) have proven their efficacy in a variety of real-world applications, but their underlying mechanisms remain a mystery.
To address this challenge and enable reliable decision-making, many GNN explainers have been proposed in recent years.
We introduce the Auxiliary Classifier Generative Adversarial Network (ACGAN) into the field of GNN explanation and propose a new GNN explainer dubbed ACGAN-GNNExplainer.
arXiv Detail & Related papers (2023-09-29T01:20:28Z) - A Survey on Explainability of Graph Neural Networks [4.612101932762187]
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
arXiv Detail & Related papers (2023-06-02T23:36:49Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - A Variational Edge Partition Model for Supervised Graph Representation
Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN based predictor that combines community-specific GNNs for the end classification task.
arXiv Detail & Related papers (2022-02-07T14:37:50Z) - Reasoning Graph Networks for Kinship Verification: from Star-shaped to
Hierarchical [85.0376670244522]
We investigate the problem of facial kinship verification by learning hierarchical reasoning graph networks.
We develop a Star-shaped Reasoning Graph Network (S-RGN) and a Hierarchical Reasoning Graph Network (H-RGN) to exploit more powerful and flexible reasoning capacity.
arXiv Detail & Related papers (2021-09-06T03:16:56Z) - Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize the deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs)
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
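The sketch below illustrates the counterfactual-explanation objective referenced in the GlobalGCE entry above: greedily delete the edge whose removal most increases the score of a class other than the current prediction, stopping once the GNN's prediction flips. The `model(x, edge_index)` graph-classifier interface, the greedy search, and the edit budget are assumptions for illustration; this is not GlobalGCE's subgraph-mapping method.

```python
# Hedged sketch: smallest greedy set of edge deletions that flips a GNN's prediction.
import torch

@torch.no_grad()
def greedy_counterfactual(model, x, edge_index, max_edits: int = 5):
    """Return a perturbed edge_index that flips the model's prediction, or None."""
    device = edge_index.device
    original = model(x, edge_index).argmax(dim=-1).item()
    edges = edge_index.t().tolist()  # list of [src, dst] pairs
    for _ in range(max_edits):
        if len(edges) <= 1:
            break
        best_score, best_idx = -float("inf"), None
        for i in range(len(edges)):
            trial = torch.tensor(edges[:i] + edges[i + 1:], dtype=torch.long, device=device).t()
            logits = model(x, trial).squeeze()
            masked = logits.clone()
            masked[original] = -float("inf")
            score = (masked.max() - logits[original]).item()  # margin of best rival class
            if score > best_score:
                best_score, best_idx = score, i
        edges.pop(best_idx)  # delete the most influential edge
        perturbed = torch.tensor(edges, dtype=torch.long, device=device).t()
        if model(x, perturbed).argmax(dim=-1).item() != original:
            return perturbed  # prediction changed with only a few deleted edges
    return None  # no counterfactual found within the edit budget
```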