Everybody Needs a Little HELP: Explaining Graphs via Hierarchical
Concepts
- URL: http://arxiv.org/abs/2311.15112v2
- Date: Sat, 2 Dec 2023 10:44:33 GMT
- Title: Everybody Needs a Little HELP: Explaining Graphs via Hierarchical
Concepts
- Authors: Jonas Jürß, Lucie Charlotte Magister, Pietro Barbiero, Pietro
Liò, Nikola Simidjievski
- Abstract summary: Graph neural networks (GNNs) have led to breakthroughs in domains such as drug discovery, social network analysis, and travel time estimation.
However, they lack interpretability, which hinders human trust and thereby deployment in settings with high-stakes decisions.
We provide HELP, a novel, inherently interpretable graph pooling approach that reveals how concepts from different GNN layers compose into new ones in later steps.
- Score: 12.365451175795338
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have led to major breakthroughs in a variety of
domains such as drug discovery, social network analysis, and travel time
estimation. However, they lack interpretability, which hinders human trust and
thereby deployment in settings with high-stakes decisions. A line of
interpretable methods approaches this by discovering a small set of relevant
concepts as subgraphs in the last GNN layer that together explain the
prediction. This can yield oversimplified explanations that fail to capture the
interactions between GNN layers. To address this oversight, we provide HELP
(Hierarchical Explainable Latent Pooling), a novel, inherently interpretable
graph pooling approach that reveals how concepts from different GNN layers
compose into new ones in later steps. HELP is more than 1-WL expressive and is
the first non-spectral, end-to-end-learnable, hierarchical graph pooling method
that can learn to pool a variable number of arbitrary connected components. We
empirically demonstrate that it performs on-par with standard GCNs and popular
pooling methods in terms of accuracy while yielding explanations that are
aligned with expert knowledge in the domains of chemistry and social networks.
In addition to a qualitative analysis, we employ concept completeness scores as
well as concept conformity, a novel metric to measure the noise in discovered
concepts, quantitatively verifying that the discovered concepts are
significantly easier to fully understand than those from previous work. Our
work represents a first step towards an understanding of graph neural networks
that goes beyond a set of concepts from the final layer and instead explains
the complex interplay of concepts on different levels.
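As an illustrative aside, the core coarsening operation the abstract describes, pooling a variable number of connected components whose nodes share a learned assignment, can be sketched in plain Python. This is a minimal sketch under stated assumptions (hard cluster labels standing in for HELP's learned soft assignments), not the paper's actual algorithm; the function name `pool_components` is hypothetical.

```python
# Minimal sketch (not HELP's implementation): contract connected components
# of nodes that share a cluster label, producing a coarsened graph.

def pool_components(edges, labels):
    """edges: list of undirected (u, v) pairs; labels: dict node -> cluster id
    (standing in for a learned assignment). Returns (component_of, pooled_edges):
    a node -> pooled-node mapping and the edge set of the coarsened graph."""
    # Union-find over same-label edges: only edges whose endpoints agree on
    # the cluster label may merge their endpoints into one component.
    parent = {n: n for n in labels}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if labels[u] == labels[v]:
            parent[find(u)] = find(v)

    component_of = {n: find(n) for n in labels}
    # Edges between different components survive in the pooled graph.
    pooled_edges = {
        tuple(sorted((component_of[u], component_of[v])))
        for u, v in edges
        if component_of[u] != component_of[v]
    }
    return component_of, pooled_edges
```

For example, a path graph 0-1-2-3 with labels {0: A, 1: A, 2: B, 3: B} pools to two super-nodes joined by a single edge. Because the number of components follows from the labels and the connectivity, the number of pooled nodes is variable rather than fixed in advance, mirroring the property the abstract highlights.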
Related papers
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Explaining the Explainers in Graph Neural Networks: a Comparative Study [23.483694828580894]
Graph Neural Networks (GNNs) have reached a widespread application in many science and engineering fields.
GNN explainers have emerged in recent years, with a multitude of methods, both novel and adapted from other domains.
arXiv Detail & Related papers (2022-10-27T10:25:51Z) - Toward Multiple Specialty Learners for Explaining GNNs via Online
Knowledge Distillation [0.17842332554022688]
Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions.
We propose a novel GNN explanation framework named SCALE, which is general and fast for explaining predictions.
In training, a black-box GNN model guides learners based on an online knowledge distillation paradigm.
Specifically, edge masking and random walk with restart procedures are executed to provide structural explanations for graph-level and node-level predictions.
arXiv Detail & Related papers (2022-10-20T08:44:57Z) - Global Concept-Based Interpretability for Graph Neural Networks via
Neuron Analysis [0.0]
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks.
They lack interpretability and transparency.
Current explainability approaches are typically local and treat GNNs as black boxes.
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts.
arXiv Detail & Related papers (2022-08-22T21:30:55Z) - Discovering the Representation Bottleneck of Graph Neural Networks from
Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z) - Deep Architecture Connectivity Matters for Its Convergence: A
Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph
Neural Networks [0.3441021278275805]
GCExplainer is an unsupervised approach for post-hoc discovery and extraction of global concept-based explanations for graph neural networks (GNNs).
We demonstrate the success of our technique on five node classification datasets and two graph classification datasets, showing that we are able to discover and extract high-quality concept representations by putting the human in the loop.
arXiv Detail & Related papers (2021-07-25T20:52:48Z) - A Peek Into the Reasoning of Neural Networks: Interpreting with
Structural Visual Concepts [38.215184251799194]
We propose a framework (VRX) to interpret classification NNs with intuitive structural visual concepts.
By means of knowledge distillation, we show VRX can take a step towards mimicking the reasoning process of NNs.
arXiv Detail & Related papers (2021-05-01T15:47:42Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Stacking many such layers, however, tends to degrade performance; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - A Chain Graph Interpretation of Real-World Neural Networks [58.78692706974121]
We propose an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure.
The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models.
We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques.
arXiv Detail & Related papers (2020-06-30T14:46:08Z)
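One recurring primitive in the list above is random walk with restart (RWR), which the SCALE entry uses to derive structural explanations. The following is a generic, textbook RWR sketch via power iteration, not SCALE's actual implementation; the function name `rwr_scores` and the default parameters are illustrative assumptions.

```python
# Generic random-walk-with-restart (RWR) sketch: score how strongly each
# node relates to a seed node by iterating walk + teleport-to-seed steps.
# Textbook power iteration, not SCALE's implementation.

def rwr_scores(adj, seed, restart=0.15, iters=100):
    """adj: dict node -> list of neighbors (assumed non-empty per node);
    returns a dict of stationary visit probabilities w.r.t. the seed."""
    nodes = list(adj)
    p = {n: 0.0 for n in nodes}
    p[seed] = 1.0  # all probability mass starts at the seed
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        for u in nodes:
            deg = len(adj[u])
            for v in adj[u]:
                # With probability 1 - restart, the walker follows a
                # uniformly random outgoing edge.
                nxt[v] += (1 - restart) * p[u] / deg
        # With probability restart, the walker teleports back to the seed.
        nxt[seed] += restart
        p = nxt
    return p
```

Nodes with high scores form the structural neighborhood most relevant to the seed, which an explainer can then threshold into a subgraph explanation.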
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.