Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
- URL: http://arxiv.org/abs/2306.02109v2
- Date: Wed, 25 Oct 2023 01:59:27 GMT
- Title: Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
- Authors: Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik
- Abstract summary: We present TimeX, a time series consistency model for training explainers.
TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model.
We evaluate TimeX on eight synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Interpreting time series models is uniquely challenging because it requires identifying both the location of the time series signals that drive model predictions and their matching to an interpretable temporal pattern. While explainers from other modalities can be applied to time series, their inductive biases do not transfer well to the inherently challenging interpretation of time series. We present TimeX, a time series consistency model for training explainers. TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model. It addresses the issue of model faithfulness by introducing model behavior consistency, a novel formulation that aligns relations in the latent space induced by the pretrained model with relations in the latent space induced by TimeX. TimeX provides discrete attribution maps and, unlike existing interpretability methods, learns a latent space of explanations that can be used in various ways, such as providing landmarks that visually aggregate similar explanations and make temporal patterns easy to recognize. We evaluate TimeX on eight synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods. We also conduct case studies using physiological time series. Quantitative evaluations demonstrate that TimeX achieves the highest or second-highest performance on every metric, relative to baselines, across all datasets. Through the case studies, we show that the novel components of TimeX have potential for training faithful, interpretable models that capture the behavior of pretrained time series models.
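
To make the model behavior consistency idea concrete, here is a minimal sketch of one way such a constraint can be expressed: embed a batch under the frozen pretrained model and under the surrogate, compute pairwise similarities within each latent space, and penalize disagreement between the two similarity structures. The function and variable names are illustrative, and the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def behavior_consistency_loss(z_ref: torch.Tensor, z_exp: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between pairwise relations in two latent spaces.

    z_ref: embeddings of a batch under the frozen pretrained model, shape (B, d_ref)
    z_exp: embeddings of the same batch under the surrogate explainer, shape (B, d_exp)
    """
    # Pairwise cosine similarities within each latent space, shape (B, B).
    sim_ref = F.cosine_similarity(z_ref.unsqueeze(1), z_ref.unsqueeze(0), dim=-1)
    sim_exp = F.cosine_similarity(z_exp.unsqueeze(1), z_exp.unsqueeze(0), dim=-1)
    # The surrogate should reproduce the reference model's similarity structure.
    return F.mse_loss(sim_exp, sim_ref)
```

In training, a term like this would be combined with the explainer's other objectives (e.g., sparsity of the attribution masks), with z_ref computed under torch.no_grad() so the pretrained model stays frozen.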
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, which uses a single input/output projection layer while delegating the modeling of diverse time series patterns to a sparse mixture of experts (a top-k routing sketch appears after this list).
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - On the Consistency and Robustness of Saliency Explanations for Time
Series Classification [4.062872727927056]
Saliency maps have been applied to interpret time series windows as images.
This paper extensively analyzes the consistency and robustness of saliency maps for time series features and temporal attribution.
arXiv Detail & Related papers (2023-09-04T09:08:22Z) - Self-Interpretable Time Series Prediction with Counterfactual
Explanations [4.658166900129066]
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving.
Most existing methods focus on interpreting predictions by assigning importance scores to segments of time series.
We develop a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions (a generic counterfactual-search sketch appears after this list).
arXiv Detail & Related papers (2023-06-09T16:42:52Z) - TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z) - Learning to Reconstruct Missing Data from Spatiotemporal Graphs with
Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts that leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions (an attention-gating sketch appears after this list).
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
arXiv Detail & Related papers (2021-02-23T05:23:35Z) - Synergetic Learning of Heterogeneous Temporal Sequences for
Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z) - timeXplain -- A Framework for Explaining the Predictions of Time Series
Classifiers [3.6433472230928428]
We present novel domain mappings for the time domain, frequency domain, and time series statistics (a band-occlusion sketch after this list illustrates the frequency-domain idea).
We analyze their explicative power as well as their limits.
We employ a novel evaluation metric to experimentally compare timeXplain to several model-specific explanation approaches.
arXiv Detail & Related papers (2020-07-15T10:32:43Z)