EiX-GNN : Concept-level eigencentrality explainer for graph neural
networks
- URL: http://arxiv.org/abs/2206.03491v1
- Date: Tue, 7 Jun 2022 07:45:45 GMT
- Title: EiX-GNN : Concept-level eigencentrality explainer for graph neural
networks
- Authors: Pascal Bourdon (XLIM-ASALI), David Helbert (XLIM-ASALI), Adrien Raison
- Abstract summary: We propose a reliable social-aware explaining method suited for graph neural network models.
Our method takes into account the human-dependent aspect underlying any explanation process.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explaining is a process of human knowledge transfer about a phenomenon
between an explainer and an explainee. Each word used to explain the phenomenon
must be chosen by the explainer according to the explainee's current knowledge
of the phenomenon and the phenomenon itself, so that the explainee reaches a
high level of understanding. Deep models, especially graph neural networks, now
play a major role in daily life, including in critical applications. In this
context, such models must be highly interpretable to humans, i.e. explainable,
in order to improve trust in their use in sensitive cases. Explaining is also a
human-dependent task, and methods that explain deep model behavior must account
for these social concerns to provide useful, high-quality explanations. Current
explaining methods often ignore this social aspect and focus only on the signal
side of the question. In this contribution we propose a reliable social-aware
explaining method for graph neural networks that incorporates the social
feature through a modular concept generator and leverages both the signal and
graph domains via an eigencentrality-based concept ordering approach. Beyond
accounting for the human-dependent aspect underlying any explanation process,
our method also achieves high scores on state-of-the-art objective metrics for
assessing explanation methods for graph neural network models.
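To make the eigencentrality-based concept ordering mentioned in the abstract more concrete, here is a minimal sketch. It is a hypothetical illustration, not the authors' implementation: it assumes concepts can be approximated as ego-networks around seed nodes and ranks them by the eigenvector centrality of the seed; the function name and parameters (`rank_concepts_by_eigencentrality`, `radius`, `top_k`) are invented for this example.

```python
# Hypothetical sketch only: the ego-network notion of a "concept" and all
# names/parameters are illustrative assumptions, not the EiX-GNN code.
import networkx as nx


def rank_concepts_by_eigencentrality(graph: nx.Graph, radius: int = 1, top_k: int = 5):
    """Rank candidate concept subgraphs by the eigenvector centrality of their seed node."""
    # Eigenvector centrality scores each node by the importance of its neighborhood.
    centrality = nx.eigenvector_centrality(graph, max_iter=1000)

    ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
    concepts = []
    for seed, score in ranked[:top_k]:
        # Approximate a "concept" as the ego-network around a highly central seed node.
        concept = nx.ego_graph(graph, seed, radius=radius)
        concepts.append((score, concept))
    return concepts


if __name__ == "__main__":
    G = nx.karate_club_graph()  # small toy graph for demonstration
    for score, concept in rank_concepts_by_eigencentrality(G, top_k=3):
        print(f"seed centrality={score:.3f}, concept nodes={sorted(concept.nodes)}")
```

Since the abstract describes the concept generator as modular, the ego-network step above merely stands in for whatever concept generator would be plugged in; only the centrality-based ordering is the point of the sketch.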
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Quantifying the Intrinsic Usefulness of Attributional Explanations for
Graph Neural Networks with Artificial Simulatability Studies [1.2891210250935146]
We extend artificial simulatability studies to the domain of graph neural networks.
Instead of costly human trials, we use explanation-supervisable graph neural networks to perform simulatability studies.
We find that relevant explanations can significantly boost the sample efficiency of graph neural networks.
arXiv Detail & Related papers (2023-05-25T11:59:42Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Mapping Knowledge Representations to Concepts: A Review and New
Perspectives [0.6875312133832078]
This review focuses on research that aims to associate internal representations with human understandable concepts.
We find this taxonomy, and theories of causality, useful for understanding what can and cannot be expected from neural network explanations.
The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability.
arXiv Detail & Related papers (2022-12-31T12:56:12Z)
- Global Concept-Based Interpretability for Graph Neural Networks via
Neuron Analysis [0.0]
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks.
However, they lack interpretability and transparency.
Current explainability approaches are typically local and treat GNNs as black-boxes.
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts.
arXiv Detail & Related papers (2022-08-22T21:30:55Z)
- An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop a framework for cortical surface-based deep learning.
First, we adapted a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAV).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Combining Sub-Symbolic and Symbolic Methods for Explainability [1.3777144060953146]
A number of sub-symbolic approaches have been developed to provide insights into the GNN decision making process.
These are first important steps on the way to explainability, but the generated explanations are often hard to understand for users that are not AI experts.
We introduce a conceptual approach combining sub-symbolic and symbolic methods for human-centric explanations.
arXiv Detail & Related papers (2021-12-03T10:57:00Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for
Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)