Combining Sub-Symbolic and Symbolic Methods for Explainability
- URL: http://arxiv.org/abs/2112.01844v1
- Date: Fri, 3 Dec 2021 10:57:00 GMT
- Title: Combining Sub-Symbolic and Symbolic Methods for Explainability
- Authors: Anna Himmelhuber, Stephan Grimm, Sonja Zillner, Mitchell Joblin,
Martin Ringsquandl and Thomas Runkler
- Abstract summary: A number of sub-symbolic approaches have been developed to provide insights into the GNN decision-making process.
These are important first steps toward explainability, but the generated explanations are often hard to understand for users who are not AI experts.
We introduce a conceptual approach combining sub-symbolic and symbolic methods for human-centric explanations.
- Score: 1.3777144060953146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Like other connectionist models, Graph Neural Networks (GNNs) lack
transparency in their decision-making. A number of sub-symbolic approaches have
been developed to provide insights into the GNN decision-making process. These
are important first steps toward explainability, but the generated
explanations are often hard to understand for users who are not AI experts. To
overcome this problem, we introduce a conceptual approach combining
sub-symbolic and symbolic methods for human-centric explanations that
incorporate domain knowledge and causality. We furthermore introduce the notion
of fidelity as a metric for evaluating how close the explanation is to the
GNN's internal decision-making process. The evaluation with a chemical dataset
and ontology shows the explanatory value and reliability of our method.
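As a rough illustration of the fidelity notion mentioned in the abstract, one common formalization in the GNN-explainability literature measures how much the model's prediction changes when the input graph is restricted to the explanation subgraph. The sketch below applies that idea to a toy mean-aggregation classifier; it is a minimal, assumed formalization for illustration (the toy_gnn model, the edge mask, and the exact score are stand-ins), not the paper's implementation.

```python
# Minimal, self-contained sketch of an explanation-fidelity check (an assumed
# formalization for illustration, not the paper's exact definition): compare
# the GNN's graph-level prediction on the full graph with its prediction when
# only the edges selected by the explainer are kept. `toy_gnn` is a stand-in
# one-layer mean-aggregation classifier.
import numpy as np

def toy_gnn(node_feats, adj):
    """Stand-in graph classifier: one mean-aggregation layer, mean readout, softmax."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = (adj @ node_feats) / deg           # aggregate neighbor features
    logits = h.mean(axis=0)                # graph-level readout
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fidelity(node_feats, adj, edge_mask, target):
    """Drop in target-class probability when only explanation edges are kept.
    A small drop suggests the explanation captures the decision-relevant subgraph."""
    p_full = toy_gnn(node_feats, adj)[target]
    p_expl = toy_gnn(node_feats, adj * edge_mask)[target]
    return float(p_full - p_expl)

# Toy usage: a 3-node graph with 2 feature dimensions; the hypothetical
# explainer keeps only the edge between nodes 0 and 1.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
M = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
print(fidelity(X, A, M, target=0))
```

A score near zero means the explanation subgraph alone already reproduces the prediction; the complementary variant (removing the explanation instead of keeping it) should produce a large drop if the explanation is faithful.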
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Generative Explanations for Graph Neural Network: Methods and Evaluations [16.67839967139831]
Graph Neural Networks (GNNs) achieve state-of-the-art performance in various graph-related tasks.
The black-box nature of GNNs limits their interpretability and trustworthiness.
Numerous explainability methods have been proposed to uncover the decision-making logic of GNNs.
arXiv Detail & Related papers (2023-11-09T22:07:15Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- EiX-GNN: Concept-level eigencentrality explainer for graph neural networks [0.0]
We propose a reliable, socially aware explanation method suited to graph neural network models.
Our method takes into account the human-dependent aspect underlying any explanation process (the eigencentrality measure this explainer builds on is sketched after this list).
arXiv Detail & Related papers (2022-06-07T07:45:45Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to current XAI methods that generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- Entropy-based Logic Explanations of Neural Networks [24.43410365335306]
We propose an end-to-end differentiable approach for extracting logic explanations from neural networks.
The method relies on an entropy-based criterion which automatically identifies the most relevant concepts.
We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy.
arXiv Detail & Related papers (2021-06-12T15:50:47Z)
- A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts [38.215184251799194]
We propose a framework (VRX) to interpret classification NNs with intuitive structural visual concepts.
By means of knowledge distillation, we show VRX can take a step towards mimicking the reasoning process of NNs.
arXiv Detail & Related papers (2021-05-01T15:47:42Z)
- Distilling neural networks into skipgram-level decision lists [4.109840601429086]
We propose a pipeline to explain RNNs by means of decision lists (also called rules) over skipgrams.
We find that our technique consistently achieves high explanation fidelity and yields qualitatively interpretable rules.
arXiv Detail & Related papers (2020-05-14T16:25:42Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
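For readers unfamiliar with the eigencentrality measure named in the EiX-GNN entry above, the following self-contained sketch computes eigenvector centrality by power iteration on a small undirected graph. It illustrates only the underlying graph measure, not the EiX-GNN explainer itself; the graph and function name are illustrative choices.

```python
# Self-contained illustration of eigenvector centrality ("eigencentrality"),
# computed by power iteration on a small undirected graph. This sketches only
# the underlying measure, not the EiX-GNN method.
import numpy as np

def eigencentrality(adj, iters=200, tol=1e-9):
    """Power iteration for the dominant eigenvector of the adjacency matrix;
    entry i scores how strongly node i connects to other central nodes."""
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: a triangle 0-1-2 with a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigencentrality(A).round(3))  # node 2 scores highest
```

In the toy graph, node 2 receives the highest score because it sits in the triangle and also links the pendant node, which is exactly the "connected to other central nodes" intuition behind eigencentrality.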
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.