Evaluation of Local Explanation Methods for Multivariate Time Series
Forecasting
- URL: http://arxiv.org/abs/2009.09092v1
- Date: Fri, 18 Sep 2020 21:15:28 GMT
- Title: Evaluation of Local Explanation Methods for Multivariate Time Series
Forecasting
- Authors: Ozan Ozyegen and Igor Ilic and Mucahit Cevik
- Abstract summary: Local interpretability is important in determining why a model makes particular predictions.
Despite the recent focus on AI interpretability, there has been a lack of research in local interpretability methods for time series forecasting.
- Score: 0.21094707683348418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to interpret a machine learning model is a crucial task in many
applications of machine learning. Specifically, local interpretability is
important in determining why a model makes particular predictions. Despite the
recent focus on AI interpretability, there has been a lack of research in local
interpretability methods for time series forecasting, while the few
interpretable methods that exist mainly focus on time series classification
tasks. In this study, we propose two novel evaluation metrics for time series
forecasting: Area Over the Perturbation Curve for Regression and Ablation
Percentage Threshold. These two metrics can measure the local fidelity of local
explanation models. We build on this theoretical foundation to collect
experimental results on two popular datasets, \textit{Rossmann sales} and
\textit{electricity}. Both metrics enable a comprehensive comparison of
numerous local explanation models, and we identify which metrics are more
sensitive. Lastly, we provide heuristic reasoning for this analysis.
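To make the two metrics concrete, below is a minimal, illustrative sketch of how perturbation-based fidelity scores of this kind can be computed for a forecasting model. It assumes a scalar-output `model_predict` callable and a relevance map produced by any local explainer; the function names, the zero-value perturbation, and the threshold rule are assumptions for illustration and may differ from the exact definitions in the paper.

```python
import numpy as np

def aopcr(model_predict, x, relevance, max_k=20, perturb_value=0.0):
    # Area Over the Perturbation Curve for Regression (illustrative sketch):
    # ablate the most relevant input values one by one and average the
    # resulting change in the forecast. Larger values suggest higher
    # local fidelity of the explanation.
    base = model_predict(x)                            # original forecast (scalar)
    order = np.argsort(relevance, axis=None)[::-1]     # most relevant first
    x_pert = x.copy()
    deltas = []
    for flat_idx in order[:max_k]:
        t, f = np.unravel_index(flat_idx, x.shape)
        x_pert[t, f] = perturb_value                   # ablate this (time step, feature)
        deltas.append(abs(base - model_predict(x_pert)))
    return float(np.mean(deltas))

def apt(model_predict, x, relevance, threshold=0.1, perturb_value=0.0):
    # Ablation Percentage Threshold (illustrative sketch): fraction of inputs
    # that must be ablated, in relevance order, before the forecast deviates
    # from the original by more than a relative `threshold`. Lower is better.
    base = model_predict(x)
    order = np.argsort(relevance, axis=None)[::-1]
    x_pert = x.copy()
    for k, flat_idx in enumerate(order, start=1):
        t, f = np.unravel_index(flat_idx, x.shape)
        x_pert[t, f] = perturb_value
        if abs(base - model_predict(x_pert)) > threshold * abs(base):
            return k / x.size
    return 1.0
```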
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast NLEs (natural language explanations) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- DANLIP: Deep Autoregressive Networks for Locally Interpretable Probabilistic Forecasting [0.0]
We propose a novel deep learning-based probabilistic time series forecasting architecture that is intrinsically interpretable.
We show that our model is not only interpretable but also provides comparable performance to state-of-the-art probabilistic time series forecasting methods.
arXiv Detail & Related papers (2023-01-05T23:40:23Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflating evaluation caused by biased datasets.
arXiv Detail & Related papers (2022-03-10T08:58:18Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used and the changes of the prediction after slightly raising or decreasing specific features are observed.
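For intuition, here is a minimal sketch of the quantile-shift idea described above, assuming a scikit-learn-style classifier with a `predict` method; the step size, the replacement of values by empirical quantiles, and the function name are illustrative assumptions rather than the article's exact procedure.

```python
import numpy as np

def quantile_shift_effect(clf, X_train, x, feature_idx, step=0.05):
    # Shift one feature of a real data point slightly up and down (by a small
    # quantile step of its empirical training distribution) and observe
    # whether the predicted class changes (illustrative sketch only).
    col = X_train[:, feature_idx]
    q = float((col <= x[feature_idx]).mean())   # approximate quantile of the current value
    effects = {}
    for direction, q_new in (("down", max(q - step, 0.0)), ("up", min(q + step, 1.0))):
        x_shift = x.copy()
        x_shift[feature_idx] = np.quantile(col, q_new)
        effects[direction] = clf.predict(x_shift.reshape(1, -1))[0]
    return effects   # predicted class after shifting the feature down / up
```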
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
- Spatiotemporal Attention for Multivariate Time Series Prediction and Interpretation [17.568599402858037]
A spatiotemporal attention mechanism (STAM) is proposed for simultaneous learning of the most important time steps and variables.
Results: STAM maintains state-of-the-art prediction accuracy while offering the benefit of accurate interpretability.
arXiv Detail & Related papers (2020-08-11T17:34:55Z)
- timeXplain -- A Framework for Explaining the Predictions of Time Series Classifiers [3.6433472230928428]
We present novel domain mappings for the time domain, frequency domain, and time series statistics.
We analyze their explicative power as well as their limits.
We employ a novel evaluation metric to experimentally compare timeXplain to several model-specific explanation approaches.
arXiv Detail & Related papers (2020-07-15T10:32:43Z)