timeXplain -- A Framework for Explaining the Predictions of Time Series
Classifiers
- URL: http://arxiv.org/abs/2007.07606v2
- Date: Mon, 20 Nov 2023 13:48:09 GMT
- Title: timeXplain -- A Framework for Explaining the Predictions of Time Series
Classifiers
- Authors: Felix Mujkanovic, Vanja Doskoč, Martin Schirneck, Patrick Schäfer, Tobias Friedrich
- Abstract summary: We present novel domain mappings for the time domain, frequency domain, and time series statistics.
We analyze their explicative power as well as their limits.
We employ a novel evaluation metric to experimentally compare timeXplain to several model-specific explanation approaches.
- Score: 3.6433472230928428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern time series classifiers display impressive predictive capabilities,
yet their decision-making processes mostly remain black boxes to the user. At
the same time, model-agnostic explainers, such as the recently proposed SHAP,
promise to make the predictions of machine learning models interpretable,
provided there are well-designed domain mappings. We bring both worlds together
in our timeXplain framework, extending the reach of explainable artificial
intelligence to time series classification and value prediction. We present
novel domain mappings for the time domain, frequency domain, and time series
statistics and analyze their explicative power as well as their limits. We
employ a novel evaluation metric to experimentally compare timeXplain to
several model-specific explanation approaches for state-of-the-art time series
classifiers.
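To make the idea of a domain mapping concrete, below is a minimal sketch of a time-domain mapping fed to a model-agnostic explainer. It assumes a scikit-learn-style classifier `clf` whose `predict_proba` accepts arrays of shape (n_samples, series_length) and a training set `X_train`; it illustrates the concept from the abstract and is not the actual timeXplain API. The series is split into contiguous segments, each segment acts as one interpretable feature, and SHAP's KernelExplainer toggles segments on and off, with disabled segments replaced by the training-set mean.

```python
import numpy as np
import shap  # model-agnostic SHAP explainer (assumed available)

def explain_time_domain(clf, x, X_train, n_segments=10, nsamples=500):
    """Return SHAP values, one per time segment, for the 1-D series `x`."""
    length = x.shape[0]
    bounds = np.linspace(0, length, n_segments + 1, dtype=int)
    baseline = X_train.mean(axis=0)  # filler values for "switched off" segments

    def from_masks(masks):
        # Map binary segment masks back to full-length series and classify them.
        series = np.tile(baseline, (masks.shape[0], 1))
        for i, mask in enumerate(masks):
            for s in np.flatnonzero(mask):
                series[i, bounds[s]:bounds[s + 1]] = x[bounds[s]:bounds[s + 1]]
        return clf.predict_proba(series)

    background = np.zeros((1, n_segments))   # all segments replaced by the baseline
    explainer = shap.KernelExplainer(from_masks, background)
    return explainer.shap_values(np.ones((1, n_segments)), nsamples=nsamples)
```

A frequency-domain mapping would work analogously by toggling bands of Fourier coefficients before transforming back to the time domain, and a statistics mapping would perturb summary statistics such as the mean or variance.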
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast NLEs (natural language explanations) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models [12.575427166236844]
We present Spectral eXplanation (SpectralX), an XAI framework that provides time-frequency explanations for time-series black-box classifiers.
We also introduce Feature Importance Approximations (FIA), a new perturbation-based XAI method.
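The summary above only names the approach, so here is a rough occlusion-style sketch of time-frequency attribution under assumed details (it is not the authors' FIA method): each cell of a short-time Fourier transform is zeroed in turn, the signal is reconstructed, and the drop in the classifier's probability for a target class is recorded. `clf` is assumed to be a scikit-learn-style classifier with `predict_proba`.

```python
import numpy as np
from scipy.signal import stft, istft

def time_frequency_importance(clf, x, target_class, nperseg=32):
    """Score each STFT cell by how much zeroing it lowers the target-class probability."""
    _, _, Z = stft(x, nperseg=nperseg)               # complex spectrogram (freqs x frames)
    base_prob = clf.predict_proba(x[None, :])[0, target_class]
    scores = np.zeros(Z.shape)
    for i in range(Z.shape[0]):                      # frequency bins
        for j in range(Z.shape[1]):                  # time frames
            Z_pert = Z.copy()
            Z_pert[i, j] = 0.0                       # occlude one time-frequency cell
            _, x_rec = istft(Z_pert, nperseg=nperseg)
            x_rec = np.pad(x_rec, (0, max(0, x.size - x_rec.size)))[: x.size]
            scores[i, j] = base_prob - clf.predict_proba(x_rec[None, :])[0, target_class]
    return scores                                    # larger = more important cell
```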
arXiv Detail & Related papers (2024-08-07T08:51:10Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Self-Interpretable Time Series Prediction with Counterfactual
Explanations [4.658166900129066]
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving.
Most existing methods focus on interpreting predictions by assigning importance scores to segments of time series.
We develop a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions.
arXiv Detail & Related papers (2023-06-09T16:42:52Z) - Encoding Time-Series Explanations through Self-Supervised Model Behavior
Consistency [26.99599329431296]
We present TimeX, a time series consistency model for training explainers.
TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model.
We evaluate TimeX on eight synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods.
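As a rough illustration of the surrogate idea (not TimeX's actual consistency objective), the sketch below distills a pretrained black-box classifier into a shallow decision tree by fitting the tree to the black box's own predictions; `black_box` and `X_unlabeled` are assumed inputs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill_surrogate(black_box, X_unlabeled, max_depth=4):
    """Fit an interpretable tree that mimics a black-box model's predictions."""
    pseudo_labels = black_box.predict(X_unlabeled)             # teacher labels
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X_unlabeled, pseudo_labels)                  # student mimics the teacher
    fidelity = np.mean(surrogate.predict(X_unlabeled) == pseudo_labels)
    return surrogate, fidelity                                 # fidelity = agreement with teacher
```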
arXiv Detail & Related papers (2023-06-03T13:25:26Z) - TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z) - VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series
Forecasting [10.605719154114354]
Time series models aim for accurate predictions of the future given the past, where the forecasts are used for important downstream tasks like business decision making.
In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a discrete set of representations that are used to predict the future.
arXiv Detail & Related papers (2022-05-31T15:43:46Z) - Instance-based Counterfactual Explanations for Time Series
Classification [11.215352918313577]
We advance a novel model-agnostic, case-based technique that generates counterfactual explanations for time series classifiers.
We show that the resulting method, Native Guide, generates plausible, proximal, sparse, and diverse explanations that are better than those produced by key benchmark counterfactual methods.
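For intuition, a bare-bones case-based counterfactual can be sketched as retrieving the nearest training series that the classifier assigns to a different class (the nearest unlike neighbor); the actual Native Guide method additionally adapts this neighbor using feature-weight information, which is omitted here. `clf` and `X_train` are assumed inputs.

```python
import numpy as np

def nearest_unlike_neighbor(clf, x, X_train):
    """Return the training series closest to `x` that receives a different prediction."""
    query_label = clf.predict(x[None, :])[0]
    candidates = X_train[clf.predict(X_train) != query_label]  # differently classified series
    dists = np.linalg.norm(candidates - x, axis=1)             # Euclidean distance to the query
    return candidates[np.argmin(dists)]                        # seed for a counterfactual
```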
arXiv Detail & Related papers (2020-09-28T10:52:48Z) - Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used and the changes of the prediction after slightly raising or decreasing specific features are observed.
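A minimal sketch of this probing procedure, under assumed details: each feature of a real data point is shifted up or down by one step on the training distribution's quantile scale, and it is recorded whether the predicted class changes. `model`, `X_train`, and the step size are illustrative assumptions.

```python
import numpy as np

def quantile_shift_probe(model, x, X_train, step=0.05):
    """Check, per feature, whether a small quantile shift up or down flips the prediction."""
    base = model.predict(x[None, :])[0]
    flips = {}
    for j in range(x.shape[0]):
        q = np.mean(X_train[:, j] <= x[j])                     # empirical quantile of x[j]
        shifted_up = np.quantile(X_train[:, j], min(q + step, 1.0))
        shifted_down = np.quantile(X_train[:, j], max(q - step, 0.0))
        x_up, x_down = x.copy(), x.copy()
        x_up[j], x_down[j] = shifted_up, shifted_down
        flips[j] = (model.predict(x_up[None, :])[0] != base,    # raised feature flips class?
                    model.predict(x_down[None, :])[0] != base)  # lowered feature flips class?
    return flips
```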
arXiv Detail & Related papers (2020-09-11T16:35:53Z) - Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated solution based on the deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationships by constructing a set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.