TsSHAP: Robust model agnostic feature-based explainability for time
series forecasting
- URL: http://arxiv.org/abs/2303.12316v1
- Date: Wed, 22 Mar 2023 05:14:36 GMT
- Title: TsSHAP: Robust model agnostic feature-based explainability for time
series forecasting
- Authors: Vikas C. Raykar, Arindam Jati, Sumanta Mukherjee, Nupur Aggarwal,
Kanthi Sarpatwar, Giridhar Ganapavarapu, Roman Vaculin
- Abstract summary: We propose a feature-based explainability algorithm, TsSHAP, that can explain the forecast of any black-box forecasting model.
We formalize the notion of local, semi-local, and global explanations in the context of time series forecasting.
- Score: 6.004928390125367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A trustworthy machine learning model should be accurate as well as
explainable. Understanding why a model makes a certain decision defines the
notion of explainability. While various flavors of explainability have been
well-studied in supervised learning paradigms like classification and
regression, literature on explainability for time series forecasting is
relatively scarce.
In this paper, we propose a feature-based explainability algorithm, TsSHAP,
that can explain the forecast of any black-box forecasting model. The method is
agnostic of the forecasting model and can provide explanations for a forecast
in terms of interpretable features defined by the user a priori.
The explanations are in terms of the SHAP values obtained by applying the
TreeSHAP algorithm on a surrogate model that learns a mapping between the
interpretable feature space and the forecast of the black-box model.
Moreover, we formalize the notion of local, semi-local, and global
explanations in the context of time series forecasting, which can be useful in
several scenarios. We validate the efficacy and robustness of TsSHAP through
extensive experiments on multiple datasets.
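The surrogate pipeline in the abstract can be illustrated with a minimal sketch: fit a tree model from user-defined interpretable features to the black-box forecasts, then run TreeSHAP on the surrogate. Everything below is a placeholder, assuming a one-step horizon and toy features; the actual TsSHAP supports multi-step forecasts and richer user-defined feature sets.

```python
# Minimal sketch of the surrogate idea (placeholder data and features,
# one-step horizon): the surrogate mimics the black-box forecaster in the
# interpretable feature space, and TreeSHAP explains the surrogate.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
y = rng.normal(size=500).cumsum()                 # toy time series

def interpretable_features(window):
    """Hypothetical user-defined features over a history window."""
    return {
        "last": window[-1],
        "mean_7": window[-7:].mean(),
        "trend_7": np.polyfit(np.arange(7), window[-7:], 1)[0],
        "std_14": window[-14:].std(),
    }

def black_box_forecast(window):
    """Stand-in for any opaque forecasting model (one step ahead)."""
    return 0.7 * window[-1] + 0.3 * window[-7:].mean()

H = 30                                            # history length per example
rows, targets = [], []
for t in range(H, len(y)):
    window = y[t - H:t]
    rows.append(interpretable_features(window))
    targets.append(black_box_forecast(window))
X = pd.DataFrame(rows)

surrogate = GradientBoostingRegressor().fit(X, targets)
shap_values = shap.TreeExplainer(surrogate).shap_values(X)
print(dict(zip(X.columns, shap_values[-1].round(3))))   # local explanation
```

Aggregating these per-forecast SHAP values over a window of forecasts, or over the whole series, is one natural route to the semi-local and global explanations the abstract formalizes.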
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
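A loose, linear-Gaussian illustration of the Markov-blanket screening that the LaPLACE entry above builds on (this is not the paper's algorithm): under joint Gaussianity, features with near-zero partial correlation with the target, given the rest, fall outside the blanket. The data, coefficients, and threshold here are arbitrary.

```python
# Loose illustration of Markov-blanket screening via partial correlation:
# features conditionally independent of the target, given the others,
# are treated as non-relevant.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 6
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

def partial_corr(x, y, Z):
    """Correlation of x and y after regressing both on Z."""
    A = np.column_stack([Z, np.ones(len(x))])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

blanket = []
for j in range(d):
    others = np.delete(np.arange(d), j)
    pc = partial_corr(X[:, j], y, X[:, others])
    if abs(pc) > 0.05:                # arbitrary threshold for the sketch
        blanket.append(j)
print("estimated relevant features:", blanket)   # expect [0, 2]
```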
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
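A toy sketch in the spirit of the ASTxplainer alignment step above, using only Python's own ast and tokenize modules: map each token to its smallest enclosing AST node and aggregate per-token scores by node type. The scores below are fabricated placeholders, not LLM predictions.

```python
# Toy token-to-AST alignment: each token is assigned to its smallest
# enclosing AST node, and (placeholder) per-token scores are tallied
# per node type.
import ast
import io
import tokenize
from collections import defaultdict

src = "total = 0\nfor i in range(10):\n    total += i * 2\n"

# Absolute character offset of a (1-based line, 0-based column) position.
line_starts = [0]
for line in src.splitlines(keepends=True):
    line_starts.append(line_starts[-1] + len(line))

def offset(pos):
    return line_starts[pos[0] - 1] + pos[1]

# Character spans of all AST nodes that carry position information.
spans = []
for node in ast.walk(ast.parse(src)):
    if getattr(node, "end_lineno", None) is not None:
        start = offset((node.lineno, node.col_offset))
        end = offset((node.end_lineno, node.end_col_offset))
        spans.append((start, end, type(node).__name__))

scores = defaultdict(float)
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type not in (tokenize.NAME, tokenize.NUMBER, tokenize.OP):
        continue
    ts, te = offset(tok.start), offset(tok.end)
    enclosing = [(e - s, name) for s, e, name in spans if s <= ts and te <= e]
    if enclosing:
        scores[min(enclosing)[1]] += 1.0      # fabricated per-token score
print(dict(scores))                           # score mass per AST node type
```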
- Grouping Shapley Value Feature Importances of Random Forests for explainable Yield Prediction [0.8543936047647136]
We explain the concept of Shapley values directly computed for groups of features and introduce an algorithm to compute them efficiently on tree structures.
We provide a blueprint for designing swarm plots that combine many local explanations for global understanding.
arXiv Detail & Related papers (2023-04-14T13:03:33Z)
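The paper above computes Shapley values for feature groups directly on the tree structure; the sketch below shows only the simpler additivity-based view, summing ordinary per-feature TreeSHAP values within made-up groups, which is a common first approximation to the same quantity.

```python
# Additivity-based grouping of per-feature SHAP values (an approximation;
# the paper computes group Shapley values directly on the trees).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=6, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)   # shape (n, 6)

# Hypothetical groups for a yield-prediction-like setting.
groups = {"weather": [0, 1, 2], "soil": [3, 4], "management": [5]}
group_shap = {name: shap_values[:, idx].sum(axis=1)
              for name, idx in groups.items()}

for name, vals in group_shap.items():
    print(f"{name}: mean |SHAP| = {np.abs(vals).mean():.3f}")
```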
- SurvSHAP(t): Time-dependent explanations of machine learning survival models [6.950862982117125]
We introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models.
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect.
We provide an accessible implementation of time-dependent explanations in Python.
arXiv Detail & Related papers (2022-08-23T17:01:14Z)
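A self-contained toy version of the time-dependent explanations above, not the authors' implementation: with a known parametric survival function S(t|x), explaining the model independently at each time point with KernelSHAP yields a SHAP curve over time for each feature. The risk coefficients and time grid are arbitrary.

```python
# Toy time-dependent attributions: explain S(t | x) separately at each
# time point, so every feature gets a curve of SHAP values over time.
import numpy as np
import shap

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
beta = np.array([0.8, -0.5, 0.0])            # known toy risk coefficients

def survival_at(t):
    # Exponential survival model: S(t | x) = exp(-t * exp(x @ beta))
    return lambda X_: np.exp(-t * np.exp(X_ @ beta))

background = shap.sample(X, 20)
for t in [0.5, 1.0, 2.0]:
    explainer = shap.KernelExplainer(survival_at(t), background)
    sv = explainer.shap_values(X[:5], nsamples=100)   # explain 5 subjects
    print(f"t={t}: mean |SHAP| per feature =",
          np.abs(sv).mean(axis=0).round(3))
```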
- Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts [20.442850522575213]
We build upon a general definition of information-theoretic saliency grounded in human perception.
We propose to express the saliency of an observed window in terms of the differential entropy of the resulting predicted future distribution.
We empirically demonstrate how our framework can recover salient observed windows from head pose features for the sample task of speaking-turn forecasting.
arXiv Detail & Related papers (2022-06-01T18:00:04Z)
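The paper above derives saliency in closed form; the sketch below only computes the core quantity for a Gaussian predictive distribution, its differential entropy, for two hypothetical observed windows. The window that yields the sharper (lower-entropy) predicted future is the more salient one under this criterion.

```python
# Differential entropy of a Gaussian predictive distribution:
# h = 0.5 * ln(2 * pi * e * sigma^2). Predictive stds below are
# hypothetical values after conditioning on two candidate windows.
import numpy as np

def gaussian_entropy(sigma):
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

sigma_window_a = 0.4    # forecast std given observed window A (hypothetical)
sigma_window_b = 1.1    # forecast std given observed window B (hypothetical)

h_a = gaussian_entropy(sigma_window_a)
h_b = gaussian_entropy(sigma_window_b)
print(f"entropy | window A: {h_a:.3f} nats, window B: {h_b:.3f} nats")
# Lower entropy -> sharper predicted future -> window A is the more
# salient observation under this criterion.
```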
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models given only a few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models gain their performance improvement by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may induce pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- An Interpretable Probabilistic Model for Short-Term Solar Power Forecasting Using Natural Gradient Boosting [0.0]
We propose a two-stage probabilistic forecasting framework able to generate highly accurate, reliable, and sharp forecasts.
The framework offers full transparency on both the point forecasts and the prediction intervals (PIs).
To highlight the performance and the applicability of the proposed framework, real data from two PV parks located in Southern Germany are employed.
arXiv Detail & Related papers (2021-08-05T12:59:38Z)
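A rough sketch of the two-stage output above, point forecasts plus prediction intervals, on synthetic data. Note that sklearn's quantile gradient boosting stands in here for the paper's natural gradient boosting (NGBoost) stage, so this shows the shape of the output rather than the paper's method; the features are invented.

```python
# Point forecasts plus prediction intervals via quantile gradient
# boosting (a stand-in for the paper's NGBoost stage).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(800, 3))         # e.g. irradiance, temp, hour
y = 5 * X[:, 0] + rng.normal(scale=0.5 + X[:, 0], size=800)  # heteroscedastic

point = GradientBoostingRegressor(loss="squared_error").fit(X, y)
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = rng.uniform(0, 1, size=(3, 3))
for xp, l, h in zip(point.predict(X_new), lo.predict(X_new), hi.predict(X_new)):
    print(f"forecast {xp:6.2f}   90% PI [{l:6.2f}, {h:6.2f}]")
```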
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of future values based on their history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
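A structural sketch only of the state space model above, with random untrained weights: a nonlinear transition network evolves the latent state, an emission network maps state to observation, and Monte Carlo rollouts over both noise sources produce the predictive distribution. All dimensions and scales here are arbitrary.

```python
# Structural sketch of a deep state space model (untrained toy weights):
# Monte Carlo rollouts of the transition and emission noise give a
# predictive distribution over the forecast horizon.
import numpy as np

rng = np.random.default_rng(4)
state_dim, horizon, n_samples = 4, 12, 500

W_trans = rng.normal(scale=0.5, size=(state_dim, state_dim))  # toy "network"
w_emit = rng.normal(size=state_dim)

def transition(s, noise):
    return np.tanh(s @ W_trans.T) + 0.1 * noise   # nonlinear state update

def emission(s, noise):
    return s @ w_emit + 0.05 * noise              # observation model

s = rng.normal(size=(n_samples, state_dim))       # toy posterior state samples
forecasts = np.empty((n_samples, horizon))
for t in range(horizon):
    s = transition(s, rng.normal(size=(n_samples, state_dim)))
    forecasts[:, t] = emission(s, rng.normal(size=n_samples))

median = np.median(forecasts, axis=0)
lo, hi = np.percentile(forecasts, [5, 95], axis=0)
print("step-1 forecast: median %.3f, 90%% band [%.3f, %.3f]"
      % (median[0], lo[0], hi[0]))
```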
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are, loosely speaking, necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
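A greedy toy version of the targeted variant described above, not the paper's robustness-based procedure: starting from the instance, repeatedly mask (to a mean baseline) the feature whose removal most raises the target-class probability, until the prediction flips. The model, data, and baseline choice are all placeholders.

```python
# Greedy extraction of features whose masking moves the prediction
# toward a target class (a loose stand-in for the paper's criteria).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                            # instance to explain
baseline = X.mean(axis=0)           # reference values for "absent" features
target = 1 - clf.predict([x])[0]    # aim at the opposite class

chosen, current = [], x.copy()
while clf.predict([current])[0] != target and len(chosen) < len(x):
    remaining = [j for j in range(len(x)) if j not in chosen]
    def gain(j):
        trial = current.copy()
        trial[j] = baseline[j]
        return clf.predict_proba([trial])[0, target]
    best = max(remaining, key=gain)
    chosen.append(best)
    current[best] = baseline[best]

flipped = clf.predict([current])[0] == target
print(f"masked features {chosen}; prediction flipped to {target}: {flipped}")
```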
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.