What went wrong and when? Instance-wise Feature Importance for
Time-series Models
- URL: http://arxiv.org/abs/2003.02821v3
- Date: Wed, 28 Oct 2020 17:23:35 GMT
- Title: What went wrong and when? Instance-wise Feature Importance for
Time-series Models
- Authors: Sana Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna
Goldenberg
- Abstract summary: We propose FIT, a framework that evaluates the importance of observations for a time-series black-box model.
FIT defines the importance of an observation based on its contribution to the shift in the model's predictive distribution.
We demonstrate the need to control for time-dependent distribution shifts.
- Score: 32.76628660490065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explanations of time series models are useful for high-stakes
applications like healthcare but have received little attention in the machine
learning literature. We propose FIT, a framework that evaluates the importance of
observations for a multivariate time-series black-box model by quantifying the
shift in the predictive distribution over time. FIT defines the importance of
an observation based on its contribution to the distributional shift under a
KL-divergence that contrasts the predictive distribution against a
counterfactual where the rest of the features are unobserved. We also
demonstrate the need to control for time-dependent distribution shifts. We
compare with state-of-the-art baselines on simulated and real-world clinical
data and demonstrate that our approach is superior in identifying important
time points and observations throughout the time series.
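As a loose illustration of the scoring idea (not the paper's exact estimator), a FIT-style importance can be read as: of the total KL shift the predictive distribution undergoes at time t, how much is recovered by revealing only the observation of interest while the remaining features stay unobserved. The function names and the categorical-distribution setting below are illustrative assumptions:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two categorical predictive distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def fit_style_importance(p_prev, p_full, p_only_obs):
    """Sketch of a FIT-style score for one observation at time t.

    p_prev     -- predictive distribution given history up to t-1
    p_full     -- predictive distribution once all features at t arrive
    p_only_obs -- counterfactual distribution where only the observation
                  of interest at t is revealed (the rest stay unobserved)

    Score = total KL shift minus the shift left unexplained by
    the observation alone.
    """
    total_shift = kl_div(p_full, p_prev)
    unexplained = kl_div(p_full, p_only_obs)
    return total_shift - unexplained
```

Under this reading, an observation whose counterfactual distribution already matches the full update gets the entire shift attributed to it, while one that leaves the prediction at its history-only value scores near zero.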
Related papers
- A Survey on Diffusion Models for Time Series and Spatio-Temporal Data [92.1255811066468]
We review the use of diffusion models in time series and spatio-temporal data, categorizing them by model, task type, data modality, and practical application domain.
We categorize diffusion models into unconditioned and conditioned types, and discuss time series and spatio-temporal data separately.
Our survey covers their application extensively in various fields including healthcare, recommendation, climate, energy, audio, and transportation.
arXiv Detail & Related papers (2024-04-29T17:19:40Z)
- Probing the Robustness of Time-series Forecasting Models with CounterfacTS [1.823020744088554]
We present and publicly release CounterfacTS, a tool to probe the robustness of deep learning models in time-series forecasting tasks.
CounterfacTS has a user-friendly interface that allows the user to visualize, compare and quantify time series data and their forecasts.
arXiv Detail & Related papers (2024-03-06T07:34:47Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Introducing the Attribution Stability Indicator: a Measure for Time Series XAI Attributions [9.734940058811707]
We propose the Attribution Stability Indicator (ASI) to take robustness and trustworthiness into account as properties of attribution techniques for time series.
We demonstrate the desired properties based on an analysis of the attributions in a dimension-reduced space and of the distribution of ASI scores over three whole time series classification datasets.
arXiv Detail & Related papers (2023-10-06T11:48:26Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- AutoFITS: Automatic Feature Engineering for Irregular Time Series [0.44198435146063353]
In irregular time series, the time at which each observation is collected may be helpful to summarise the dynamics of the data and improve forecasting performance.
We develop a novel automatic feature engineering framework that focuses on extracting information from the time at which each instance is collected.
We study how valuable this information is by integrating it in a time series forecasting workflow and investigate how it compares to or complements state-of-the-art methods for regular time series forecasting.
arXiv Detail & Related papers (2021-12-29T19:42:48Z)
- Instance-wise Graph-based Framework for Multivariate Time Series Forecasting [69.38716332931986]
We propose a simple yet efficient instance-wise graph-based framework to utilize the inter-dependencies of different variables at different time stamps.
The key idea of our framework is aggregating information from the historical time series of different variables to the current time series that we need to forecast.
arXiv Detail & Related papers (2021-09-14T07:38:35Z)
- Temporal Dependencies in Feature Importance for Time Series Predictions [4.082348823209183]
We propose WinIT, a framework for evaluating feature importance in time series prediction settings.
We demonstrate how the solution improves the appropriate attribution of features within time steps.
WinIT achieves 2.47x better performance than FIT and other feature importance methods on the real-world clinical MIMIC mortality task.
arXiv Detail & Related papers (2021-07-29T20:31:03Z)
- Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions.
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
arXiv Detail & Related papers (2021-02-23T05:23:35Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of a series' future values based on its history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by neural networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
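The emission/transition decomposition this summary describes can be sketched with toy non-linear maps standing in for the learned networks; all shapes, names, and the tanh layers below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_map(x, W, b):
    """One-layer non-linear map, a stand-in for a trained network."""
    return np.tanh(W @ x + b)

# Toy dimensions; in a real model both maps would be learned.
state_dim, obs_dim = 4, 2
W_trans, b_trans = rng.normal(size=(state_dim, state_dim)), rng.normal(size=state_dim)
W_emit, b_emit = rng.normal(size=(obs_dim, state_dim)), rng.normal(size=obs_dim)

def forecast(z0, steps, obs_noise=0.1):
    """Roll a non-linear state space model forward:
    z_t = f(z_{t-1});  y_t ~ N(g(z_t), obs_noise^2 I)."""
    z, ys = z0, []
    for _ in range(steps):
        z = tanh_map(z, W_trans, b_trans)    # transition model
        mean = tanh_map(z, W_emit, b_emit)   # emission model
        ys.append(mean + obs_noise * rng.normal(size=obs_dim))
    return np.array(ys)
```

Sampling many such rollouts yields an empirical predictive distribution, which is what makes the forecast probabilistic rather than a point estimate.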
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- Spatiotemporal Attention for Multivariate Time Series Prediction and Interpretation [17.568599402858037]
We propose a spatiotemporal attention mechanism (STAM) for simultaneous learning of the most important time steps and variables.
Results: STAM maintains state-of-the-art prediction accuracy while offering the benefit of accurate interpretability.
arXiv Detail & Related papers (2020-08-11T17:34:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.