Global Concept-Based Interpretability for Graph Neural Networks via
Neuron Analysis
- URL: http://arxiv.org/abs/2208.10609v1
- Date: Mon, 22 Aug 2022 21:30:55 GMT
- Title: Global Concept-Based Interpretability for Graph Neural Networks via
Neuron Analysis
- Authors: Han Xuanyuan, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte
Magister, Pietro Liò
- Abstract summary: Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks.
However, they lack interpretability and transparency.
Current explainability approaches are typically local and treat GNNs as black-boxes.
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are highly effective on a variety of
graph-related tasks; however, they lack interpretability and transparency.
Current explainability approaches are typically local and treat GNNs as
black-boxes. They do not look inside the model, inhibiting human trust in the
model and explanations. Motivated by the ability of neurons to detect
high-level semantic concepts in vision models, we perform a novel analysis on
the behaviour of individual GNN neurons to answer questions about GNN
interpretability, and propose new metrics for evaluating the interpretability
of GNN neurons. We propose a novel approach for producing global explanations
for GNNs using neuron-level concepts to enable practitioners to have a
high-level view of the model. Specifically, (i) to the best of our knowledge,
this is the first work which shows that GNN neurons act as concept detectors
and have strong alignment with concepts formulated as logical compositions of
node degree and neighbourhood properties; (ii) we quantitatively assess the
importance of detected concepts, and identify a trade-off between training
duration and neuron-level interpretability; (iii) we demonstrate that our
global explainability approach has advantages over the current state-of-the-art
-- we can disentangle the explanation into individual interpretable concepts
backed by logical descriptions, which reduces potential for bias and improves
user-friendliness.
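
To make point (i) concrete: the abstract does not spell out how neuron-concept alignment is measured, but a reasonable reading, in the spirit of Network Dissection for vision models, is to score each hidden neuron against candidate logical concepts by the overlap between its top-activating nodes and the nodes satisfying the concept. The Python sketch below is illustrative only; the IoU score, the activation-percentile threshold, and the hand-picked degree/neighbourhood concepts are assumptions, not the paper's exact procedure.

```python
# Minimal sketch of neuron-level concept detection for a GNN, assuming a
# Network-Dissection-style alignment score; the paper's exact metric,
# thresholds, and concept vocabulary are not given in the abstract.
import numpy as np
import networkx as nx

def concept_alignment(neuron_acts, concept_mask, percentile=95):
    """IoU between a neuron's top-activating nodes and a concept mask."""
    threshold = np.percentile(neuron_acts, percentile)
    active = neuron_acts >= threshold
    union = np.logical_or(active, concept_mask).sum()
    if union == 0:
        return 0.0
    return np.logical_and(active, concept_mask).sum() / union

def concept_masks(G):
    """Concepts as logical compositions of node degree and neighbourhood properties."""
    deg = dict(G.degree())
    nodes = list(G.nodes())
    return {
        "degree >= 3": np.array([deg[v] >= 3 for v in nodes]),
        "degree >= 3 AND has a degree-1 neighbour": np.array([
            deg[v] >= 3 and any(deg[u] == 1 for u in G.neighbors(v))
            for v in nodes
        ]),
    }

# Score every (neuron, concept) pair and keep each neuron's best-aligned concept.
G = nx.barabasi_albert_graph(200, 2)
hidden = np.random.rand(G.number_of_nodes(), 16)  # stand-in for real GNN activations
masks = concept_masks(G)
for j in range(hidden.shape[1]):
    best = max(masks, key=lambda name: concept_alignment(hidden[:, j], masks[name]))
    score = concept_alignment(hidden[:, j], masks[best])
    print(f"neuron {j:2d}: {best} (IoU = {score:.2f})")
```

In practice, `hidden` would hold the activations of a trained GNN layer over the dataset's nodes, and the concept vocabulary would be searched over logical compositions rather than fixed by hand.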
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- ConceptLens: from Pixels to Understanding [1.3466710708566176]
ConceptLens is an innovative tool designed to illuminate the intricate workings of deep neural networks (DNNs) by visualizing hidden neuron activations.
By integrating deep learning with symbolic methods, ConceptLens offers users a unique way to understand what triggers neuron activations.
arXiv Detail & Related papers (2024-10-04T20:49:12Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Automated Natural Language Explanation of Deep Visual Neurons with Large
Models [43.178568768100305]
This paper proposes a novel post-hoc framework for generating semantic explanations of neurons with large foundation models.
Our framework is designed to be compatible with various model architectures and datasets, enabling automated and scalable neuron interpretation.
arXiv Detail & Related papers (2023-10-16T17:04:51Z)
- A Survey on Explainability of Graph Neural Networks [4.612101932762187]
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
arXiv Detail & Related papers (2023-06-02T23:36:49Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Transferability of coVariance Neural Networks and Application to
Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCNs) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We study GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs, and we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation
Metrics [8.795591344648294]
We focus on explainable graph neural networks and categorize them based on the explainability methods they use.
We provide the common performance metrics for GNNs explanations and point out several future research directions.
arXiv Detail & Related papers (2022-07-26T01:45:54Z)
- Explainability in Graph Neural Networks: An Experimental Survey [12.440636971075977]
Graph neural networks (GNNs) have been extensively developed for graph representation learning.
However, GNNs suffer from the black-box problem, as people cannot understand the mechanisms underlying their decisions.
Several GNN explainability methods have been proposed to explain the decisions made by GNNs.
arXiv Detail & Related papers (2022-03-17T11:25:41Z)
- A Chain Graph Interpretation of Real-World Neural Networks [58.78692706974121]
We propose an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure.
The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models.
We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques.
arXiv Detail & Related papers (2020-06-30T14:46:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.