Introducing the Attribution Stability Indicator: a Measure for Time
Series XAI Attributions
- URL: http://arxiv.org/abs/2310.04178v1
- Date: Fri, 6 Oct 2023 11:48:26 GMT
- Title: Introducing the Attribution Stability Indicator: a Measure for Time
Series XAI Attributions
- Authors: Udo Schlegel, Daniel A. Keim
- Abstract summary: We propose the Attribution Stability Indicator (ASI), a measure that incorporates robustness and trustworthiness as properties of attribution techniques for time series.
We demonstrate the desired properties through an analysis of the attributions in a dimension-reduced space and of the distribution of ASI scores over three whole time series classification datasets.
- Score: 9.734940058811707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given the increasing amount and general complexity of time series data in
domains such as finance, weather forecasting, and healthcare, there is a
growing need for state-of-the-art performance models that can provide
interpretable insights into underlying patterns and relationships. Attribution
techniques enable the extraction of explanations from time series models to
gain insights but are hard to evaluate for their robustness and
trustworthiness. We propose the Attribution Stability Indicator (ASI), a
measure that incorporates robustness and trustworthiness as properties of
attribution techniques for time series. We extend a perturbation analysis
with correlations of the original time series to the perturbed instance and
to the attributions, so that the desired properties are included in the
measure. We demonstrate these properties through an analysis of the
attributions in a dimension-reduced space and of the distribution of ASI
scores over three whole time series classification datasets.
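The abstract describes extending a perturbation analysis with correlations between the original series, the perturbed instance, and the attributions. The following is a minimal sketch of such an analysis, assuming a hypothetical `predict_fn` and `attribute_fn` and a simple zero-perturbation of the most highly attributed time points; the paper's exact perturbation strategy and score combination may differ.

```python
import numpy as np

def asi_sketch(predict_fn, attribute_fn, x, top_frac=0.1):
    """Perturbation analysis in the spirit of the Attribution Stability
    Indicator (ASI). Hypothetical sketch: the paper combines these
    components into a single score in its own way.

    predict_fn   -- maps a series to a scalar model output (assumed)
    attribute_fn -- maps a series to per-timestep attributions (assumed)
    """
    attr = attribute_fn(x)

    # Perturb the most highly attributed time points by zeroing them
    k = max(1, int(top_frac * len(x)))
    idx = np.argsort(np.abs(attr))[-k:]
    x_pert = x.copy()
    x_pert[idx] = 0.0

    # Correlation of the original series to the perturbed instance,
    # and of the original attributions to the perturbed attributions
    corr_series = np.corrcoef(x, x_pert)[0, 1]
    corr_attr = np.corrcoef(attr, attribute_fn(x_pert))[0, 1]

    # Change in the model output after perturbation: a faithful
    # attribution should mark points whose removal changes the output
    prob_drop = predict_fn(x) - predict_fn(x_pert)

    return {"corr_series": corr_series,
            "corr_attr": corr_attr,
            "prob_drop": prob_drop}
```

A toy usage with a synthetic sine series and a magnitude-based attribution shows the expected behavior: perturbing the highest-attributed points lowers the model output while the series correlations stay within [-1, 1].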
Related papers
- TimeInf: Time Series Data Contribution via Influence Functions [8.018453062120916]
TimeInf is a data contribution estimation method for time-series datasets.
Our empirical results demonstrate that TimeInf outperforms state-of-the-art methods in identifying harmful anomalies.
TimeInf offers intuitive and interpretable attributions of data values, allowing us to easily distinguish diverse anomaly patterns through visualizations.
arXiv Detail & Related papers (2024-07-21T19:10:40Z)
- Learning Graph Structures and Uncertainty for Accurate and Calibrated Time-series Forecasting [65.40983982856056]
We introduce STOIC, which leverages correlations between time series to learn their underlying structure and to provide well-calibrated and accurate forecasts.
Over a wide range of benchmark datasets, STOIC provides 16% more accurate and better-calibrated forecasts.
arXiv Detail & Related papers (2024-07-02T20:14:32Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model-class namely "Denoising Diffusion Probabilistic Models" or DDPMs for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive, learns to capture holistic concepts, and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - Grouped self-attention mechanism for a memory-efficient Transformer [64.0125322353281]
Real-world tasks such as forecasting weather, electricity consumption, and stock market involve predicting data that vary over time.
Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time.
We propose two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA)
Our proposed model exhibits reduced computational complexity and performance comparable to or better than existing methods.
arXiv Detail & Related papers (2022-10-02T06:58:49Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Massive feature extraction for explaining and foretelling hydroclimatic
time series forecastability at the global scale [0.0]
We study the relationships between descriptive time series features and actual time series forecastability.
We apply this framework to three global datasets.
We find that this forecastability in terms of Nash-Sutcliffe efficiency is strongly related to several descriptive features.
arXiv Detail & Related papers (2021-07-25T19:15:19Z) - Interpretable Time-series Representation Learning With Multi-Level
Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z) - Spatiotemporal Attention for Multivariate Time Series Prediction and
Interpretation [17.568599402858037]
A spatiotemporal attention mechanism (STAM) is proposed for simultaneous learning of the most important time steps and variables.
Results: STAM maintains state-of-the-art prediction accuracy while offering the benefit of accurate interpretability.
arXiv Detail & Related papers (2020-08-11T17:34:55Z) - What went wrong and when? Instance-wise Feature Importance for
Time-series Models [32.76628660490065]
We propose a framework that evaluates the importance of observations for a time-series black-box model.
FIT defines the importance of an observation based on its contribution to the distributional shift.
We demonstrate the need to control for time-dependent distribution shifts.
arXiv Detail & Related papers (2020-03-05T18:45:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.