An Empirical Study of Explainable AI Techniques on Deep Learning Models for Time Series Tasks
- URL: http://arxiv.org/abs/2012.04344v1
- Date: Tue, 8 Dec 2020 10:33:57 GMT
- Title: An Empirical Study of Explainable AI Techniques on Deep Learning Models for Time Series Tasks
- Authors: Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady
- Abstract summary: Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques.
Evaluation and verification are usually achieved with a visual interpretation by humans on individual images or text.
We propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series.
- Score: 18.70973390984415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision explanations of machine learning black-box models are often
generated by applying Explainable AI (XAI) techniques. However, many proposed
XAI methods produce unverified outputs. Evaluation and verification are usually
achieved with a visual interpretation by humans on individual images or text.
In this preregistration, we propose an empirical study and benchmark framework
to apply attribution methods for neural networks developed for images and text
data on time series. We present a methodology to automatically evaluate and
rank attribution techniques on time series using perturbation methods to
identify reliable approaches.
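The core idea of such a perturbation-based benchmark can be sketched as follows: compute attributions for a time series classifier, occlude the most relevant time points, and measure how much prediction quality degrades; methods whose top-ranked points cause the largest degradation are ranked as more reliable. Below is a minimal illustration of this idea, not the authors' implementation; the `model` and `attribute` callables are hypothetical placeholders.

```python
import numpy as np

def perturbation_score(model, attribute, X, y, k=10, fill=0.0):
    """Score an attribution method: occlude the k most relevant
    time points per series and measure the drop in accuracy.

    model     : callable mapping (n, T) arrays to predicted labels (n,)
    attribute : callable mapping (n, T) arrays to relevance scores (n, T)
    X, y      : time series (n, T) and integer labels (n,)
    """
    base_acc = np.mean(model(X) == y)        # accuracy before perturbation

    rel = attribute(X)                        # relevance per time point
    top_k = np.argsort(-rel, axis=1)[:, :k]   # most relevant points per series

    X_pert = X.copy()
    rows = np.arange(len(X))[:, None]
    X_pert[rows, top_k] = fill                # occlude the most relevant points

    pert_acc = np.mean(model(X_pert) == y)    # accuracy after perturbation
    return base_acc - pert_acc                # larger drop => more faithful attributions
```

A larger accuracy drop when occluding highly attributed points suggests the method indeed highlights time points the model relies on; comparing this score across attribution techniques yields the ranking described in the abstract.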
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
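For context, the classic linear Kalman filter these hybrid designs build on is a short recursion; a minimal textbook sketch follows (general background, not taken from the tutorial itself):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the classic linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    F, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P_pred  # corrected covariance
    return x_new, P_new
```

DNN-aided variants typically learn parts of this recursion (e.g., the gain or the noise statistics) when F, H, Q, or R are unknown or misspecified.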
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Information Theoretic Text-to-Image Alignment [49.396917351264655]
We present a novel method that relies on an information-theoretic alignment measure to steer image generation.
Our method is on-par or superior to the state-of-the-art, yet requires nothing but a pre-trained denoising network to estimate MI.
arXiv Detail & Related papers (2024-05-31T12:20:02Z)
- EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods [1.6383837447674294]
This paper brings together various benchmark datasets and novel performance metrics in an initial benchmarking platform.
Our datasets incorporate ground truth explanations for class-conditional features.
This platform assesses the performance of post-hoc XAI methods in the quality of the explanations they produce.
arXiv Detail & Related papers (2024-05-20T14:16:06Z)
- A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI [13.269396832189754]
XAI for time series data has become increasingly important in finance, healthcare, and climate science.
However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging.
This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models.
arXiv Detail & Related papers (2023-07-11T08:26:08Z)
- Quantitative Analysis of Primary Attribution Explainable Artificial Intelligence Methods for Remote Sensing Image Classification [0.4532517021515834]
We leverage state-of-the-art machine learning approaches to perform remote sensing image classification.
We offer insights and recommendations for selecting the most appropriate XAI method.
arXiv Detail & Related papers (2023-06-06T22:04:45Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models [0.0]
Clinicians are often sceptical about applying automatic image processing approaches, especially deep learning based methods, in practice.
This paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas which influence the decision of the algorithm most.
The work presents TorchEsegeta, a unified framework for applying various interpretability and explainability techniques to deep learning models.
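TorchEsegeta's own API is not shown here; as a rough illustration of the kind of post-hoc attribution technique such a framework wraps, the sketch below applies Integrated Gradients from Captum (a real PyTorch interpretability library) to an arbitrary image classifier. The model and input are placeholders.

```python
import torch
from captum.attr import IntegratedGradients
from torchvision.models import resnet18

model = resnet18(weights=None).eval()    # any image classifier works here
image = torch.randn(1, 3, 224, 224)      # placeholder input image

ig = IntegratedGradients(model)
# Attribute the predicted class back to the input pixels.
target = model(image).argmax(dim=1)
attributions = ig.attribute(image, target=target, n_steps=50)
print(attributions.shape)                # same shape as the input image
```

The resulting per-pixel attributions are what gets rendered as the "anatomical areas which influence the decision" in the clinical setting described above.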
arXiv Detail & Related papers (2021-10-16T01:00:15Z)
- Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey [7.211834628554803]
We present an overview of existing explainable AI (XAI) methods applied on time series.
We also provide a reflection on the impact of these explanation methods to provide confidence and trust in the AI systems.
arXiv Detail & Related papers (2021-04-02T09:14:00Z)
- On the Post-hoc Explainability of Deep Echo State Networks for Time Series Forecasting, Image and Video Classification [63.716247731036745]
Echo State Networks have attracted much attention over time, mainly due to the simplicity and computational efficiency of their learning algorithm; their explainability, however, has received far less scrutiny.
This work addresses the issue by conducting an explainability study of Echo State Networks applied to learning tasks with time series, image and video data.
Specifically, the study proposes three different techniques capable of eliciting understandable information about the knowledge grasped by these recurrent models.
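Echo State Networks themselves follow a simple recursion: a fixed random reservoir is driven by the input, and only a linear readout is trained. A minimal textbook sketch of the reservoir update follows (general background, not the paper's explainability code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random weights: only the readout is trained in an ESN.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def run_reservoir(u_seq, alpha=0.3):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - alpha) * x + alpha * np.tanh(W_in @ u + W @ x)  # leaky update
        states.append(x.copy())
    return np.asarray(states)  # (T, n_res); a readout is then fit, e.g. by ridge regression
```

Post-hoc explainability then asks which inputs and reservoir dynamics the trained readout actually relies on.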
arXiv Detail & Related papers (2021-02-17T08:56:33Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
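One diagnostic property described in that last entry, agreement with human rationales, can be sketched as a simple per-example comparison. Below is a hedged illustration assuming binary human annotations of salient tokens; the function name and data layout are illustrative, not from the paper.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def rationale_agreement(saliency, human_mask):
    """Mean Average Precision of saliency scores against human rationales.

    saliency   : list of (T_i,) arrays of per-token saliency scores
    human_mask : list of (T_i,) binary arrays marking human-salient tokens
    """
    scores = [
        average_precision_score(m, s)
        for s, m in zip(saliency, human_mask)
        if m.any()  # AP is undefined without positive labels
    ]
    return float(np.mean(scores))
```

Higher agreement indicates a technique's rationales align with what humans consider salient, one of several properties such a diagnostic study scores.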