Deep Learning Reproducibility and Explainable AI (XAI)
- URL: http://arxiv.org/abs/2202.11452v1
- Date: Wed, 23 Feb 2022 12:06:20 GMT
- Title: Deep Learning Reproducibility and Explainable AI (XAI)
- Authors: A.-M. Leventi-Peetz and T. Östreich
- Abstract summary: The nondeterminism of Deep Learning (DL) training algorithms and its influence on the explainability of neural network (NN) models are investigated.
To discuss the issue, two convolutional neural networks (CNNs) have been trained and their results compared.
- Score: 9.13755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The nondeterminism of Deep Learning (DL) training algorithms and its
influence on the explainability of neural network (NN) models are investigated
in this work with the help of image classification examples. To discuss the
issue, two convolutional neural networks (CNNs) have been trained and their
results compared. The comparison serves to explore the feasibility of
creating deterministic, robust DL models and deterministic explainable
artificial intelligence (XAI) in practice. Successes and limitations of all
efforts carried out here are described in detail. The source code of the
attained deterministic models is listed in this work. Reproducibility is
indexed as a development-phase component of the Model Governance Framework
proposed by the EU within its excellence-in-AI approach. Furthermore,
reproducibility is a requirement for establishing causality in the
interpretation of model results and for building trust in the rapidly
expanding applications of AI systems. Problems that have to be solved on the
way to reproducibility, and ways to deal with some of them, are examined in
this work.
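For context on what achieving deterministic DL training involves in practice, the following is a minimal sketch of the RNG seeding and backend settings such runs typically require. It is an illustrative PyTorch configuration, not the authors' published source code (which is listed in the paper itself).

```python
# Illustrative sketch of typical determinism settings for PyTorch training.
# NOT the authors' code; the source of their deterministic models is in the paper.
import os
import random

import numpy as np
import torch

def make_deterministic(seed: int = 42) -> None:
    """Seed all relevant RNGs and force deterministic kernels."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # CPU RNG (also seeds CUDA RNGs)
    torch.cuda.manual_seed_all(seed)  # all GPU devices explicitly
    # Required for deterministic cuBLAS results on CUDA >= 10.2:
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # Raise an error whenever an op lacks a deterministic implementation:
    torch.use_deterministic_algorithms(True)
    # Keep cuDNN from autotuning its way into nondeterministic kernels:
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

make_deterministic(42)
```

Even with all of these settings in place, bitwise-identical results across different GPUs, driver versions, or library releases are generally not guaranteed, which is part of the difficulty the paper examines.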
Related papers
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- A Detailed Study of Interpretability of Deep Neural Network based Top Taggers [3.8541104292281805]
Recent developments in explainable AI (XAI) allow researchers to explore the inner workings of deep neural networks (DNNs).
We explore the interpretability of models designed to identify jets coming from top quark decay in high energy proton-proton collisions at the Large Hadron Collider (LHC).
Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models.
arXiv Detail & Related papers (2022-10-09T23:02:42Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for counterfactual identification and estimation.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Quality Diversity Evolutionary Learning of Decision Trees [4.447467536572625]
We show that MAP-Elites can diversify hybrid models over a feature space that captures both the model complexity and its behavioral variability.
We apply our method to two well-known control problems from the OpenAI Gym library, on which we discuss the "illumination" patterns projected by MAP-Elites (a minimal sketch of the MAP-Elites loop follows this entry).
arXiv Detail & Related papers (2022-08-17T13:57:32Z)
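To make the quality-diversity idea concrete, below is a minimal, self-contained sketch of the MAP-Elites loop on a toy continuous problem. The two-dimensional descriptor, the fitness function, and all parameters are illustrative stand-ins, not the paper's decision-tree setup.

```python
# Minimal toy sketch of MAP-Elites (illustrative; not the paper's code).
import random

random.seed(0)
GRID = 10                       # bins per descriptor dimension
archive = {}                    # cell -> (genome, fitness)

def evaluate(genome):
    """Toy fitness: closeness of all genes to 0.5 (higher is better)."""
    return -sum((g - 0.5) ** 2 for g in genome)

def descriptor(genome):
    """Discretize the first two genes into a 2-D archive cell; in the paper
    the descriptors are model complexity and behavioral variability."""
    return (min(int(genome[0] * GRID), GRID - 1),
            min(int(genome[1] * GRID), GRID - 1))

def insert(genome):
    """MAP-Elites rule: each cell keeps only its best ('elite') solution."""
    cell, fit = descriptor(genome), evaluate(genome)
    if cell not in archive or fit > archive[cell][1]:
        archive[cell] = (genome, fit)

for _ in range(200):            # random initialization phase
    insert([random.random() for _ in range(5)])

for _ in range(5000):           # select-mutate-reinsert phase
    parent, _ = random.choice(list(archive.values()))
    child = [min(max(g + random.gauss(0.0, 0.1), 0.0), 1.0) for g in parent]
    insert(child)

print(f"{len(archive)} of {GRID * GRID} cells illuminated")
```

The "illumination" patterns mentioned above correspond to how much of the descriptor grid ends up filled with elites.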
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, called EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models (a toy sketch of this hybrid-loss idea follows this entry).
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
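As an illustration of the general physics-informed recipe behind such models (not the authors' EINN architecture), here is a toy PyTorch sketch that fits a network to synthetic infection counts while penalizing disagreement with SIR dynamics. The rates beta and gamma and the synthetic data are assumed values.

```python
# Toy physics-informed sketch with an SIR residual (illustrative only).
import torch

torch.manual_seed(0)
beta, gamma = 0.3, 0.1                          # assumed SIR rates
t = torch.linspace(0, 50, 100).reshape(-1, 1)   # time grid
t.requires_grad_(True)                          # needed for autograd d/dt

net = torch.nn.Sequential(                      # maps t -> (S, I, R) fractions
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 3), torch.nn.Softmax(dim=-1))

# Synthetic "observed" infected fraction standing in for surveillance data.
observed_I = 0.1 * torch.exp(-((t.detach() - 20.0) / 10.0) ** 2).squeeze(-1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    S, I, R = net(t).unbind(dim=-1)
    # Time derivatives of the compartments via automatic differentiation.
    dS = torch.autograd.grad(S.sum(), t, create_graph=True)[0].squeeze(-1)
    dI = torch.autograd.grad(I.sum(), t, create_graph=True)[0].squeeze(-1)
    # Mechanistic residuals: dS/dt = -beta*S*I and dI/dt = beta*S*I - gamma*I.
    physics = ((dS + beta * S * I) ** 2
               + (dI - beta * S * I + gamma * I) ** 2).mean()
    data = ((I - observed_I) ** 2).mean()       # data-driven term
    loss = data + physics                       # hybrid objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The single scalar loss combines a data-fit term with an ODE-residual term, which is the sense in which such models leverage both mechanistic theory and neural expressibility.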
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.