Quantifying the Intrinsic Usefulness of Attributional Explanations for
Graph Neural Networks with Artificial Simulatability Studies
- URL: http://arxiv.org/abs/2305.15961v1
- Date: Thu, 25 May 2023 11:59:42 GMT
- Title: Quantifying the Intrinsic Usefulness of Attributional Explanations for
Graph Neural Networks with Artificial Simulatability Studies
- Authors: Jonas Teufel, Luca Torresi, Pascal Friederich
- Abstract summary: We extend artificial simulatability studies to the domain of graph neural networks.
Instead of costly human trials, we use explanation-supervisable graph neural networks to perform simulatability studies.
We find that relevant explanations can significantly boost the sample efficiency of graph neural networks.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the increasing relevance of explainable AI, assessing the quality of
explanations remains a challenging issue. Due to the high costs associated with
human-subject experiments, various proxy metrics are often used to
approximately quantify explanation quality. Generally, one possible
interpretation of the quality of an explanation is its inherent value for
teaching a related concept to a student. In this work, we extend artificial
simulatability studies to the domain of graph neural networks. Instead of
costly human trials, we use explanation-supervisable graph neural networks to
perform simulatability studies to quantify the inherent usefulness of
attributional graph explanations. We perform an extensive ablation study to
investigate the conditions under which the proposed analyses are most
meaningful. We additionally validate our methods applicability on real-world
graph classification and regression datasets. We find that relevant
explanations can significantly boost the sample efficiency of graph neural
networks and analyze the robustness towards noise and bias in the explanations.
We believe that the notion of usefulness obtained from our proposed
simulatability analysis provides a dimension of explanation quality that is
largely orthogonal to the commonly assessed notion of faithfulness and has great
potential to expand the toolbox of explanation quality assessments,
specifically for graph explanations.
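As a minimal sketch of this protocol, assuming a hypothetical student interface with `fit`/`score` methods and precomputed attributional explanations (none of these names come from the authors' code), the usefulness measurement reduces to a repeated paired comparison:

```python
import numpy as np

def simulatability_gain(train_set, test_set, make_student, n_repetitions=25):
    """Usefulness of a set of attributional explanations, measured as the
    performance gain of an explanation-supervised student over a student
    trained on inputs and labels alone (hypothetical interface)."""
    gains = []
    for seed in range(n_repetitions):
        # Reference student: sees only graphs and labels.
        plain = make_student(seed)
        plain.fit(train_set.graphs, train_set.labels)

        # Explanation-supervised student: additionally trained to
        # reproduce the node/edge importance masks under evaluation.
        supervised = make_student(seed)
        supervised.fit(train_set.graphs, train_set.labels,
                       explanations=train_set.explanations)

        gains.append(supervised.score(test_set.graphs, test_set.labels)
                     - plain.score(test_set.graphs, test_set.labels))
    # A consistently positive gain indicates intrinsically useful explanations.
    return float(np.mean(gains)), float(np.std(gains))
```

The paired, seed-matched comparison is what lets the gain be attributed to the explanations rather than to training noise.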
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes model learning by paying closer attention to those training samples with a high difference in explanations.
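As a rough illustration of this reweighting idea, and not the paper's actual formulation, per-sample weights can be derived from how much a sample's explanation changes under a small input perturbation (all function names here are hypothetical):

```python
import torch
import torch.nn.functional as F

def consistency_weights(expl_orig, expl_perturbed, temperature=1.0):
    """Hypothetical adaptive weights: samples whose explanations change
    most under a small perturbation receive more attention in training."""
    # expl_orig, expl_perturbed: (batch, ...) saliency maps for the same samples.
    gap = (expl_orig - expl_perturbed).flatten(1).norm(dim=1)
    weights = torch.softmax(gap / temperature, dim=0) * gap.shape[0]
    return weights.detach()  # weights stay outside the computation graph

def reweighted_loss(logits, labels, weights):
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).mean()
```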
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns in the model's behavior.
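A naive baseline for such ensembling normalizes and averages the individual saliency maps; the paper instead learns the combination with a convolutional model, so the following sketch shows only the simplest hand-rolled variant:

```python
import numpy as np

def average_explanations(saliency_maps):
    """Min-max normalize each model's saliency map, then average.
    saliency_maps: list of (H, W) arrays, one per ensembled model."""
    normed = []
    for m in saliency_maps:
        lo, hi = float(m.min()), float(m.max())
        normed.append((m - lo) / (hi - lo + 1e-8))
    return np.mean(normed, axis=0)  # consensus importance per pixel
```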
arXiv Detail & Related papers (2024-04-16T08:39:29Z)
- PAC Learnability under Explanation-Preserving Graph Perturbations [15.83659369727204]
Graph neural networks (GNNs) operate over graphs, enabling the model to leverage the complex relationships and dependencies in graph-structured data.
A graph explanation is a subgraph that is an 'almost sufficient' statistic of the input graph with respect to its classification label.
This work considers two methods for leveraging such perturbation invariances in the design and training of GNNs.
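One way such perturbation invariance can be leveraged is as a data augmentation that perturbs the graph everywhere except the explanatory subgraph. A minimal sketch, assuming dense 0/1 adjacency matrices and a boolean edge mask marking the explanation (a hypothetical helper, not the paper's construction):

```python
import numpy as np

def explanation_preserving_perturbation(adj, expl_edge_mask, p_flip=0.05, rng=None):
    """Randomly flips edges *outside* the explanatory subgraph, so the
    'almost sufficient' statistic for the label is preserved."""
    rng = rng or np.random.default_rng()
    adj = adj.copy()
    n = adj.shape[0]
    flips = rng.random((n, n)) < p_flip
    flips &= ~expl_edge_mask.astype(bool)   # never touch explanation edges
    flips = np.triu(flips, 1)               # work on the upper triangle
    flips = flips | flips.T                 # keep the graph undirected
    adj[flips] = 1 - adj[flips]
    return adj
```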
arXiv Detail & Related papers (2024-02-07T17:23:15Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on the tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misinterpretation.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
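In practice, such insight-driven clean-up could look like the sketch below, where the per-sample concept attribution scores and the set of irrelevant concepts are assumed to come from a SOXAI-style dataset-level analysis (all names hypothetical):

```python
def filter_training_set(samples, concept_scores, irrelevant_concepts, max_share=0.5):
    """Drop samples whose attribution mass concentrates on concepts that a
    dataset-level analysis flagged as irrelevant."""
    kept = []
    for sample, scores in zip(samples, concept_scores):
        total = sum(scores.values()) + 1e-8
        irrelevant = sum(scores.get(c, 0.0) for c in irrelevant_concepts)
        if irrelevant / total < max_share:
            kept.append(sample)
    return kept
```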
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- XInsight: Revealing Model Insights for GNNs with Flow-based Explanations [0.0]
Many high-stakes applications, such as drug discovery, require human-intelligible explanations from the models.
We propose an explainability algorithm for GNNs called XInsight that generates a distribution of model explanations using GFlowNets.
We show the utility of XInsight's explanations by analyzing the generated compounds using QSAR modeling.
arXiv Detail & Related papers (2023-06-07T21:25:32Z)
- MEGAN: Multi-Explanation Graph Attention Network [1.1470070927586016]
We propose a multi-explanation graph attention network (MEGAN).
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable and explanations can actively be trained in an explanation-supervised manner.
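Explanation supervision of this kind typically amounts to an auxiliary loss on the attention values. A generic sketch for the regression case, assuming node attentions in (0, 1) and ground-truth importance masks; this is not MEGAN's exact objective:

```python
import torch
import torch.nn.functional as F

def explanation_supervised_loss(pred, target, node_attn, node_mask, lam=1.0):
    """Task loss plus a term pulling the differentiable node attentions
    (values in (0, 1)) towards ground-truth importance masks."""
    task_loss = F.mse_loss(pred, target)
    expl_loss = F.binary_cross_entropy(node_attn, node_mask)
    return task_loss + lam * expl_loss
```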
arXiv Detail & Related papers (2022-11-23T16:10:13Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while the explanation of unsupervised graph-level representation learning is still unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, the Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, revealing that the robustness of representations benefits the fidelity of explanatory subgraphs.
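Objectives in this family generally trade the subgraph's informativeness about the representation against its compression of the input graph; hedging on the exact USIB formulation, the generic information-bottleneck form reads:

```latex
% Z: graph representation, G: input graph, G_s: explanatory subgraph,
% beta: trade-off between informativeness and compression.
% The exact USIB objective may differ in constraints and estimators.
\max_{G_s \subseteq G} \; I(Z; G_s) - \beta \, I(G; G_s)
```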
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, the performance of graph neural networks tends to deteriorate as more layers are stacked, and several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
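The adaptive mechanism can be sketched as follows: propagate transformed node features over several hops with a fixed normalized adjacency, then let each node learn how much of every hop to retain. A simplified reading of DAGNN with a dense adjacency, not the authors' implementation:

```python
import torch
import torch.nn as nn

class AdaptiveHopAggregation(nn.Module):
    """Combines multi-hop propagations with learned per-node retention
    scores, enlarging the receptive field without over-smoothing."""
    def __init__(self, dim, k_hops):
        super().__init__()
        self.k_hops = k_hops
        self.score = nn.Linear(dim, 1)  # retention score per node and hop

    def forward(self, x, adj_norm):
        # x: (N, dim) transformed node features; adj_norm: (N, N) normalized adjacency.
        hops = [x]
        for _ in range(self.k_hops):
            hops.append(adj_norm @ hops[-1])  # one more hop of propagation
        h = torch.stack(hops, dim=1)          # (N, K+1, dim)
        s = torch.sigmoid(self.score(h))      # (N, K+1, 1)
        return (s * h).sum(dim=1)             # adaptively weighted receptive field
```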
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We construct a heterogeneous graph with factual, temporal, and logical knowledge, develop a graph neural network over it, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)