A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI
- URL: http://arxiv.org/abs/2307.05104v1
- Date: Tue, 11 Jul 2023 08:26:08 GMT
- Title: A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI
- Authors: Udo Schlegel, Daniel A. Keim
- Abstract summary: XAI for time series data has become increasingly important in finance, healthcare, and climate science.
However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging.
This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models.
- Score: 13.269396832189754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) has gained significant attention
recently as the demand for transparency and interpretability of machine
learning models has increased. In particular, XAI for time series data has
become increasingly important in finance, healthcare, and climate science.
However, evaluating the quality of explanations, such as attributions provided
by XAI techniques, remains challenging. This paper provides an in-depth
analysis of using perturbations to evaluate attributions extracted from time
series models. A perturbation analysis involves systematically modifying the
input data and evaluating the impact on the attributions generated by the XAI
method. We apply this approach to several state-of-the-art XAI techniques and
evaluate their performance on three time series classification datasets. Our
results demonstrate that the perturbation analysis approach can effectively
evaluate the quality of attributions and provide insights into the strengths
and limitations of XAI techniques. Such an approach can guide the selection of
XAI methods for time series data, e.g., focusing on return time rather than
precision, and facilitate the development of more reliable and interpretable
machine learning models for time series analysis.
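To make the procedure concrete, here is a minimal sketch of one common variant of such a perturbation analysis in Python. It is illustrative only: the `model` and `explain` callables and the zero-substitution strategy are assumptions, not the paper's exact setup.

```python
import numpy as np

def perturbation_score(model, explain, X, k=10):
    """Zero out the k most-attributed time points per series and
    measure the resulting drop in the model's predicted confidence.

    model   -- callable: (n_samples, n_timesteps) -> class probabilities
    explain -- callable: (n_samples, n_timesteps) -> per-timestep attributions
    X       -- input series, shape (n_samples, n_timesteps)
    """
    base = model(X).max(axis=1)        # confidence on the unmodified input
    attr = explain(X)                  # attribution per time point

    X_pert = X.copy()
    for i in range(len(X)):
        top_k = np.argsort(np.abs(attr[i]))[-k:]  # most relevant time points
        X_pert[i, top_k] = 0.0                    # zero substitution

    drop = base - model(X_pert).max(axis=1)
    return drop.mean()                 # larger drop = better attributions
```

Comparing this score across attribution methods, and against a random-attribution baseline, indicates which technique actually localizes the relevant time points; swapping in other perturbations (e.g., noise, mean, or inverse values) yields further variants of the analysis.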
Related papers
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z)
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular, model-agnostic XAI method, SHapley Additive exPlanations (SHAP).
We devise algorithms to generate relative feature importance under dynamic inference across a suite of popular machine learning and deep learning methods, together with metrics that quantify how well explanations generated in the static case hold up.
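As a loose illustration of such a consistency check (not the paper's algorithm), the following sketch uses the `shap` library to compare attributions before and after a small input perturbation; the stand-in model and synthetic dataset are assumptions.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Fit a small model and compute SHAP attributions on the clean inputs.
X, y = make_regression(n_samples=200, n_features=10, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
base_attr = explainer.shap_values(X)   # shape (n_samples, n_features)

# Recompute on slightly perturbed inputs; explanations that hold up
# should drift only a little under such small perturbations.
rng = np.random.default_rng(0)
noisy_attr = explainer.shap_values(X + rng.normal(scale=0.01, size=X.shape))
print("mean attribution drift:", np.abs(base_attr - noisy_attr).mean())
```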
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods [1.6383837447674294]
This paper brings together various benchmark datasets and novel performance metrics in an initial benchmarking platform.
Our datasets incorporate ground truth explanations for class-conditional features.
This platform assesses the performance of post-hoc XAI methods in the quality of the explanations they produce.
arXiv Detail & Related papers (2024-05-20T14:16:06Z)
- XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance [1.3958169829527285]
A lack of formal underpinning leaves it unclear what conclusions can safely be drawn from the results of a given XAI method.
This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies.
We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge detection methods.
arXiv Detail & Related papers (2023-06-22T11:31:11Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Temporal Relevance Analysis for Video Action Models [70.39411261685963]
We first propose a new approach to quantify the temporal relationships between frames captured by CNN-based action models.
We then conduct comprehensive experiments and in-depth analysis to provide a better understanding of how temporal modeling is affected.
arXiv Detail & Related papers (2022-04-25T19:06:48Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- XAI Methods for Neural Time Series Classification: A Brief Review [0.0]
We examine the current state of eXplainable AI (XAI) methods with a focus on approaches for opening up deep learning black boxes for the task of time series classification.
Our contribution also aims at deriving promising directions for future work, to advance XAI for deep learning on time series data.
arXiv Detail & Related papers (2021-08-18T07:26:19Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
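CEILS itself intervenes in the latent space of a generative model; purely as a toy illustration of the counterfactual idea, the sketch below computes the closed-form minimal counterfactual for a linear classifier (the linear setting and all names are assumptions).

```python
import numpy as np

def linear_counterfactual(w, b, x, margin=1e-3):
    """Smallest change to x that flips sign(w @ x + b).

    For a linear decision function the minimal move is along w:
    solve w @ (x + t * w) + b = -sign(score) * margin for t.
    """
    score = w @ x + b
    t = -(score + np.sign(score) * margin) / (w @ w)
    return x + t * w

w, b = np.array([1.0, -2.0]), 0.5
x = np.array([2.0, 0.5])               # classified positive: score = 1.5
x_cf = linear_counterfactual(w, b, x)  # minimally edited input
print(x_cf, w @ x_cf + b)              # score now just below zero
```

A latent-space approach such as CEILS searches for the intervention in a learned representation instead, so that the resulting counterfactual stays on the data manifold and corresponds to feasible actions.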
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks [18.70973390984415]
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques.
Evaluation and verification are usually achieved by humans visually interpreting individual images or texts.
We propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series.
arXiv Detail & Related papers (2020-12-08T10:33:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.