Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
- URL: http://arxiv.org/abs/2104.00950v1
- Date: Fri, 2 Apr 2021 09:14:00 GMT
- Title: Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
- Authors: Thomas Rojat, Raphaël Puget, David Filliat, Javier Del Ser, Rodolphe Gelin, and Natalia Díaz-Rodríguez
- Abstract summary: We present an overview of existing explainable AI (XAI) methods applied on time series.
We also provide a reflection on the impact of these explanation methods to provide confidence and trust in the AI systems.
- Score: 7.211834628554803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most state-of-the-art methods applied to time series are deep
learning methods that are too complex to be interpreted. This lack of
interpretability is a major drawback, as several real-world applications are
critical tasks, such as those in the medical or autonomous driving fields.
The explainability of models applied to time series has not gathered as much
attention as in the computer vision or natural language processing fields.
In this paper, we present an overview of existing explainable AI (XAI)
methods applied to time series and illustrate the type of explanations they
produce. We also provide a reflection on how these explanation methods can
foster confidence and trust in AI systems.
Related papers
- XAI for time-series classification leveraging image highlight methods [0.0699049312989311]
We present a Deep Neural Network (DNN) in a teacher-student architecture (distillation model) that offers interpretability in time-series classification tasks.
arXiv Detail & Related papers (2023-11-28T10:59:18Z) - Is Task-Agnostic Explainable AI a Myth? [0.0]
Our work serves as a framework for unifying the challenges of contemporary explainable AI (XAI).
We demonstrate that while XAI methods provide supplementary and potentially useful output for machine learning models, researchers and decision-makers should be mindful of their conceptual and technical limitations.
We examine three XAI research avenues spanning image, textual, and graph data, covering saliency, attention, and graph-type explainers.
arXiv Detail & Related papers (2023-07-13T07:48:04Z) - Interpretation of Time-Series Deep Models: A Survey [27.582644914283136]
We present a wide range of post-hoc interpretation methods for time-series models based on backpropagation, perturbation, and approximation.
We also want to bring focus onto inherently interpretable models, a novel category of interpretation where human-understandable information is designed within the models.
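The perturbation family of post-hoc methods mentioned above can be illustrated with a minimal occlusion sketch: each time window is replaced by a baseline value and the drop in the model's output is taken as that window's importance. This is a generic illustration, not the method of any one surveyed paper; `model_score` stands in for an arbitrary black-box scoring function, and the toy model at the bottom is purely hypothetical.

```python
import numpy as np

def occlusion_saliency(model_score, series, window=5, baseline=0.0):
    """Perturbation-based attribution for a 1-D time series.

    model_score: callable mapping a series -> scalar score (e.g. a
    class probability from any black-box model).
    Each window is replaced by the baseline value; the resulting drop
    in score is recorded as that window's importance.
    """
    original = model_score(series)
    importance = np.zeros_like(series, dtype=float)
    for start in range(0, len(series), window):
        perturbed = series.copy()
        perturbed[start:start + window] = baseline
        importance[start:start + window] = original - model_score(perturbed)
    return importance

# Toy "model": its score is the mean of one salient region of the input.
score = lambda s: float(s[10:20].mean())
x = np.zeros(30)
x[10:20] = 1.0
sal = occlusion_saliency(score, x, window=5)
# Only the windows overlapping indices 10-19 receive nonzero importance.
```

Backpropagation-based methods in the same taxonomy replace the occlusion loop with gradients of the score with respect to the input, which requires a differentiable model rather than a black box.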
arXiv Detail & Related papers (2023-05-23T23:43:26Z) - Explainable AI: current status and future directions [11.92436948211501]
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI)
XAI can explain how AI obtained a particular solution and can also answer other "wh" questions.
This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view.
arXiv Detail & Related papers (2021-07-12T08:42:19Z) - On the Post-hoc Explainability of Deep Echo State Networks for Time
Series Forecasting, Image and Video Classification [63.716247731036745]
Echo State Networks have attracted much attention over time, mainly due to the simplicity and computational efficiency of their learning algorithm.
This work addresses this issue by conducting an explainability study of Echo State Networks when applied to learning tasks with time series, image and video data.
Specifically, the study proposes three different techniques capable of eliciting understandable information about the knowledge grasped by these recurrent models.
arXiv Detail & Related papers (2021-02-17T08:56:33Z) - Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
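The core idea behind counterfactual explanations can be sketched far more simply than the adversarial image-to-image translation the paper uses: search for a minimal change to the input that flips the model's decision. The greedy hill-climbing below is a toy illustration under that assumption only; `target_prob` and the logistic toy classifier are hypothetical stand-ins, not the paper's method.

```python
import numpy as np
from math import exp

def greedy_counterfactual(target_prob, x, step=0.25, max_iter=100):
    """Gradient-free counterfactual search.

    target_prob: callable mapping an input vector to the probability of
    the desired (counterfactual) class. One feature is nudged per
    iteration, always choosing the single nudge that most increases
    that probability, until it crosses 0.5 or no nudge helps.
    """
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        p = target_prob(cf)
        if p > 0.5:
            return cf  # decision flipped: this is the counterfactual
        best_gain, best_cand = 0.0, None
        for i in range(len(cf)):
            for d in (-step, step):
                cand = cf.copy()
                cand[i] += d
                gain = target_prob(cand) - p
                if gain > best_gain:
                    best_gain, best_cand = gain, cand
        if best_cand is None:
            return None  # stuck: no single-feature nudge improves
        cf = best_cand
    return None

# Toy logistic classifier: the target class needs x[0] + x[1] > 1.
prob = lambda v: 1.0 / (1.0 + exp(-(v[0] + v[1] - 1.0)))
x = np.array([0.0, 0.0])
cf = greedy_counterfactual(prob, x)
```

The GAN-based approach in the paper serves the same goal but produces realistic image edits rather than raw feature nudges, which is what makes the explanations accessible to non-experts.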
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Knowledge as Invariance -- History and Perspectives of
Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z) - An Empirical Study of Explainable AI Techniques on Deep Learning Models
For Time Series Tasks [18.70973390984415]
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques.
Evaluation and verification are usually achieved with a visual interpretation by humans on individual images or text.
We propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series.
arXiv Detail & Related papers (2020-12-08T10:33:57Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works in the direction to attain Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.