Temporal Dependencies in Feature Importance for Time Series Predictions
- URL: http://arxiv.org/abs/2107.14317v1
- Date: Thu, 29 Jul 2021 20:31:03 GMT
- Title: Temporal Dependencies in Feature Importance for Time Series Predictions
- Authors: Clayton Rooke, Jonathan Smith, Kin Kwan Leung, Maksims Volkovs, Saba
Zuberi
- Abstract summary: We propose WinIT, a framework for evaluating feature importance in time series prediction settings.
We demonstrate how WinIT improves the attribution of features within individual time steps.
WinIT achieves 2.47x better performance than FIT and other feature importance methods on the real-world clinical MIMIC mortality task.
- Score: 4.082348823209183
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explanation methods applied to sequential models for multivariate time series
prediction are receiving more attention in the machine learning literature. While
current methods perform well at providing instance-wise explanations, they
struggle to efficiently and accurately make attributions over long periods of
time and with complex feature interactions. We propose WinIT, a framework for
evaluating feature importance in time series prediction settings by quantifying
the shift in predictive distribution over multiple instances in a windowed
setting. Comprehensive empirical evidence shows our method improves on the
previous state-of-the-art, FIT, by capturing temporal dependencies in feature
importance. We also demonstrate how the solution improves the appropriate
attribution of features within time steps, which existing interpretability
methods often fail to do. We compare with baselines on simulated and real-world
clinical data. WinIT achieves 2.47x better performance than FIT and other
feature importance methods on the real-world clinical MIMIC mortality task. The
code for this work is available at https://github.com/layer6ai-labs/WinIT.
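The abstract states the mechanism only at a high level: a feature is scored by how far the model's predictive distribution shifts when that feature's recent window of observations is masked. The sketch below illustrates that general idea; the `predict_proba` interface, the carry-forward masking, and the use of KL divergence are all illustrative assumptions rather than the authors' implementation (the paper may, for example, replace masked observations with samples from a generative model).
```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete predictive distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def windowed_importance(predict_proba, x, feature, t, window):
    """Score `feature` at time `t` by the shift in the model's predictive
    distribution when the feature's last `window` observations are masked.

    predict_proba: callable mapping a (T, D) array to a probability vector
    x:             observed multivariate series of shape (T, D)
    """
    start = max(0, t - window + 1)
    x_masked = x.copy()
    # Illustrative mask: carry the last pre-window value forward.
    fill = x[start - 1, feature] if start > 0 else 0.0
    x_masked[start:t + 1, feature] = fill
    p_full = predict_proba(x[:t + 1])
    p_masked = predict_proba(x_masked[:t + 1])
    return kl_divergence(p_full, p_masked)
```
Sweeping `window` over several sizes and aggregating the scores is what separates a windowed formulation from single-step attribution: a feature whose influence is spread over many steps only registers at the larger window sizes.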
Related papers
- MTSCI: A Conditional Diffusion Model for Multivariate Time Series Consistent Imputation [41.681869408967586]
The key research question is how to ensure imputation consistency, i.e., intra-consistency between observed and imputed values.
Previous methods rely solely on the inductive bias of the imputation targets to guide the learning process.
arXiv Detail & Related papers (2024-08-11T10:24:53Z)
- TSI-Bench: Benchmarking Time Series Imputation [52.27004336123575]
TSI-Bench is a comprehensive benchmark suite for time series imputation utilizing deep learning techniques.
The TSI-Bench pipeline standardizes experimental settings to enable fair evaluation of imputation algorithms.
TSI-Bench innovatively provides a systematic paradigm to tailor time series forecasting algorithms for imputation purposes.
arXiv Detail & Related papers (2024-06-18T16:07:33Z)
- Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning [22.28251586213348]
aLLM4TS is an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning.
A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding.
arXiv Detail & Related papers (2024-02-07T13:51:26Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Time Associated Meta Learning for Clinical Prediction [78.99422473394029]
We propose a novel time associated meta learning (TAML) method to make effective predictions at multiple future time points.
To address the sparsity problem after task splitting, TAML employs a temporal information sharing strategy to augment the number of positive samples.
We demonstrate the effectiveness of TAML on multiple clinical datasets, where it consistently outperforms a range of strong baselines.
arXiv Detail & Related papers (2023-03-05T03:54:54Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids cumulative error and does not increase time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- SAITS: Self-Attention-based Imputation for Time Series [6.321652307514677]
SAITS is a novel method based on the self-attention mechanism for missing value imputation in time series.
It learns missing values from a weighted combination of two diagonally-masked self-attention blocks.
Experiments show that SAITS efficiently outperforms state-of-the-art methods on the time-series imputation task.
arXiv Detail & Related papers (2022-02-17T08:40:42Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Mimic: An adaptive algorithm for multivariate time series classification [11.49627617337276]
Time series data are valuable but are often inscrutable.
Gaining trust in time series classifiers for finance, healthcare, and other critical applications may rely on creating interpretable models.
We propose a novel Mimic algorithm that retains the predictive accuracy of the strongest classifiers while introducing interpretability.
arXiv Detail & Related papers (2021-11-08T04:47:31Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Benchmarking Deep Learning Interpretability in Time Series Predictions [41.13847656750174]
Saliency methods are used extensively to highlight the importance of input features in model predictions.
We set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures; a minimal sketch of one such method follows this list.
arXiv Detail & Related papers (2020-10-26T22:07:53Z)
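The benchmarking entry above concerns the saliency-based attribution methods that windowed approaches like WinIT are compared against. As a concrete reference point, here is a minimal gradient-times-input saliency computation for a time-series classifier; the `TinyGRUClassifier` and all shapes and names are illustrative assumptions, not code from the benchmark.
```python
import torch
import torch.nn as nn

class TinyGRUClassifier(nn.Module):
    """Hypothetical stand-in model; any sequence classifier mapping
    (batch, time, features) to logits would work the same way."""
    def __init__(self, n_features=3, n_classes=2, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.rnn(x)
        return self.head(h[-1])

def gradient_x_input_saliency(model, x, target_class):
    """One importance score per (time step, feature) cell of the input."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return (x.grad * x.detach()).abs().squeeze(0)  # shape (T, D)

model = TinyGRUClassifier()
series = torch.randn(1, 50, 3)  # one series: 50 steps, 3 features
saliency = gradient_x_input_saliency(model, series, target_class=1)
print(saliency.shape)           # torch.Size([50, 3])
```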
This list is automatically generated from the titles and abstracts of the papers on this site.