Explaining time series models using frequency masking
- URL: http://arxiv.org/abs/2406.13584v1
- Date: Wed, 19 Jun 2024 14:19:59 GMT
- Title: Explaining time series models using frequency masking
- Authors: Thea Brüsch, Kristoffer K. Wickstrøm, Mikkel N. Schmidt, Tommy S. Alstrøm, Robert Jenssen
- Abstract summary: Time series data is important for describing many critical domains such as healthcare, finance, and climate.
Current methods for obtaining saliency maps assume that the salient information is localized in the raw input space.
We propose FreqRISE, which uses masking-based methods to produce explanations in the frequency and time-frequency domains.
- Score: 10.706092195673255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series data is fundamentally important for describing many critical domains such as healthcare, finance, and climate, where explainable models are necessary for safe automated decision-making. Developing eXplainable AI (XAI) in these domains therefore implies explaining salient information in the time series. Current methods for obtaining saliency maps assume that the salient information is localized in the raw input space. In this paper, we argue that the salient information of many time series is more likely to be localized in the frequency domain. We propose FreqRISE, which uses masking-based methods to produce explanations in the frequency and time-frequency domains, and show that it achieves the best performance across a number of tasks.
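As a rough illustration of the masking idea (not the paper's actual implementation), the sketch below estimates frequency-domain saliency in a RISE-like fashion: random binary masks zero out FFT bins of the input, the masked signals are re-scored by a black-box model, and the masks are averaged weighted by the model's output. The classifier `toy_classifier`, the mask count, and the keep probability are all hypothetical choices for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    """Stand-in black box: responds to the energy in the 50 Hz bin (hypothetical)."""
    spectrum = np.abs(np.fft.rfft(x))
    return spectrum[50] / (spectrum.sum() + 1e-9)

def freq_masking_saliency(f, x, n_masks=500, keep_prob=0.5):
    """RISE-style saliency in the frequency domain: randomly drop frequency
    bins, re-score the reconstructed signal, and average the masks weighted
    by the model's output."""
    X = np.fft.rfft(x)
    n_bins = X.shape[0]
    importance = np.zeros(n_bins)
    total = 0.0
    for _ in range(n_masks):
        m = (rng.random(n_bins) < keep_prob).astype(float)  # 1 = keep the bin
        x_masked = np.fft.irfft(X * m, n=x.shape[0])
        score = f(x_masked)
        importance += score * m
        total += score
    return importance / (total + 1e-9)

# 1-second signal sampled at 1024 Hz: components at 50 Hz and 120 Hz
t = np.arange(1024) / 1024
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
sal = freq_masking_saliency(toy_classifier, x)
print(int(np.argmax(sal)))  # bin with the highest frequency-domain saliency
```

With a classifier that only cares about the 50 Hz component, the saliency map concentrates on that bin, which is the kind of frequency-localized explanation the abstract argues for.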
Related papers
- FLEXtime: Filterbank learning for explaining time series [10.706092195673257]
We propose a new method for time series explainability called FLEXtime.
It uses a filterbank to split the time series into frequency bands and learns the optimal combinations of these bands.
Our evaluation shows that FLEXtime on average outperforms state-of-the-art explainability methods across a range of datasets.
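FLEXtime learns its filterbank; as a minimal stand-in, the sketch below splits a series into equal-width frequency bands with rectangular FFT masks. The band count and the rectangular masks are simplifying assumptions, not the paper's learned filters.

```python
import numpy as np

def fft_filterbank(x, n_bands):
    """Split a 1-D series into equal-width frequency bands using rectangular
    FFT masks (a simple stand-in for a learned filterbank)."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros(len(X))
        mask[lo:hi] = 1.0  # keep only this band's bins
        bands.append(np.fft.irfft(X * mask, n=len(x)))
    return np.stack(bands)

x = np.sin(2 * np.pi * 5 * np.arange(256) / 256)
bands = fft_filterbank(x, n_bands=4)
# Because the band masks partition the spectrum, the components sum back
# to the original signal
print(np.allclose(bands.sum(axis=0), x))
```

A learned method would instead optimize which combination of such bands best explains the model's prediction.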
arXiv Detail & Related papers (2024-11-06T15:06:42Z) - Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models [12.575427166236844]
We present Spectral eXplanation (SpectralX), an XAI framework that provides time-frequency explanations for time-series black-box classifiers.
We also introduce Feature Importance Approximations (FIA), a new perturbation-based XAI method.
arXiv Detail & Related papers (2024-08-07T08:51:10Z) - ROSE: Register Assisted General Time Series Forecasting with Decomposed Frequency Learning [17.734609093955374]
We propose a Register Assisted General Time Series Forecasting Model with Decomposed Frequency Learning (ROSE)
ROSE employs Decomposed Frequency Learning for the pre-training task, which decomposes coupled semantic and periodic information in time series with frequency-based masking and reconstruction to obtain unified representations across domains.
After pre-training on large-scale time series data, ROSE achieves state-of-the-art forecasting performance on 8 real-world benchmarks.
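The frequency-based masking-and-reconstruction pre-training task can be sketched roughly as follows; the mask ratio and the plain random-bin mask are illustrative assumptions, and a real model would be trained to reconstruct the original series from the corrupted one.

```python
import numpy as np

rng = np.random.default_rng(1)

def frequency_masked_input(x, mask_ratio=0.3):
    """Zero out a random subset of frequency bins and return the corrupted
    series; the pre-training objective would be to reconstruct x from it."""
    X = np.fft.rfft(x)
    mask = rng.random(X.shape[0]) >= mask_ratio  # True = keep the bin
    return np.fft.irfft(X * mask, n=len(x)), mask

x = np.sin(2 * np.pi * 3 * np.arange(128) / 128)
x_corrupt, mask = frequency_masked_input(x)
# Reconstruction loss would be e.g. MSE(model(x_corrupt), x)
mse = np.mean((x_corrupt - x) ** 2)
print(x_corrupt.shape, mask.shape)
```

Masking in the frequency domain corrupts global periodic structure rather than local time steps, which is what lets the pre-training separate periodic from semantic information.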
arXiv Detail & Related papers (2024-05-24T06:01:09Z) - Time Series Diffusion in the Frequency Domain [54.60573052311487]
We analyze whether representing time series in the frequency domain is a useful inductive bias for score-based diffusion models.
We show that a dual diffusion process occurs in the frequency domain with an important nuance.
We show how to adapt the denoising score matching approach to implement diffusion models in the frequency domain.
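A prerequisite for running diffusion in the frequency domain is an invertible, real-valued frequency representation of the series. One simple such mapping (an assumption for illustration, not necessarily the paper's exact parameterization) concatenates the real and imaginary parts of the rFFT:

```python
import numpy as np

def to_freq_repr(x):
    """Map a real series to a real-valued frequency representation
    (real and imaginary rFFT parts concatenated) that a frequency-domain
    diffusion model could noise and denoise."""
    X = np.fft.rfft(x)
    return np.concatenate([X.real, X.imag])

def from_freq_repr(z, n):
    """Invert to_freq_repr: rebuild the complex spectrum and apply irfft."""
    half = len(z) // 2
    return np.fft.irfft(z[:half] + 1j * z[half:], n=n)

x = np.random.default_rng(2).standard_normal(64)
z = to_freq_repr(x)
x_back = from_freq_repr(z, n=64)
print(np.allclose(x, x_back))  # the round trip is lossless
```

Because the mapping is linear and invertible, Gaussian noise added to `z` corresponds to (correlated) Gaussian noise in the time domain, which is the "important nuance" territory the abstract alludes to.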
arXiv Detail & Related papers (2024-02-08T18:59:05Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Time Series Analysis via Network Science: Concepts and Algorithms [62.997667081978825]
This review provides a comprehensive overview of existing mapping methods for transforming time series into networks.
We describe the main conceptual approaches, provide authoritative references and give insight into their advantages and limitations in a unified notation and language.
Although still very recent, this research area has much potential and with this survey we intend to pave the way for future research on the topic.
arXiv Detail & Related papers (2021-10-11T13:33:18Z) - Deep Autoregressive Models with Spectral Attention [74.08846528440024]
We propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module.
By characterizing in the spectral domain the embedding of the time series as occurrences of a random process, our method can identify global trends and seasonality patterns.
Two spectral attention models, one global and one local to the time series, integrate this information into the forecast and perform spectral filtering to remove noise from the time series.
arXiv Detail & Related papers (2021-07-13T11:08:47Z) - Voice2Series: Reprogramming Acoustic Models for Time Series Classification [65.94154001167608]
Voice2Series is a novel end-to-end approach that reprograms acoustic models for time series classification.
We show that V2S either outperforms or is tied with state-of-the-art methods on 20 tasks, and improves their average accuracy by 1.84%.
arXiv Detail & Related papers (2021-06-17T07:59:15Z) - Explaining Time Series Predictions with Dynamic Masks [91.3755431537592]
We propose dynamic masks (Dynamask) to explain predictions of a machine learning model.
With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offer a neat improvement in the identification of feature importance over time.
The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance.
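Dynamask learns continuous masks and supports several perturbation operators; as a hedged sketch of the underlying perturbation step only, the code below blends each time step between the original series and a moving-average baseline according to a mask in [0, 1]. The baseline choice and window size are hypothetical.

```python
import numpy as np

def dynamic_perturbation(x, mask, baseline=None):
    """Blend each time step between the original series and a baseline
    according to a per-step mask in [0, 1]; mask = 1 keeps the original
    value, mask = 0 replaces it with the baseline."""
    if baseline is None:
        kernel = np.ones(5) / 5
        baseline = np.convolve(x, kernel, mode="same")  # moving average
    return mask * x + (1 - mask) * baseline

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
m = np.zeros(100)
m[40:60] = 1.0  # hypothesize that this window of time steps is salient
x_pert = dynamic_perturbation(x, m)
```

A saliency method then optimizes the mask so that the model's prediction on the perturbed input stays close to (or moves away from) its prediction on the original, subject to a parsimony penalty on the mask.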
arXiv Detail & Related papers (2021-06-09T18:01:09Z) - Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding [8.45908939323268]
We propose a self-supervised framework for learning generalizable representations for non-stationary time series.
Our motivation stems from the medical field, where the ability to model the dynamic nature of time series data is especially valuable.
arXiv Detail & Related papers (2021-06-01T19:53:24Z) - Deep learning for time series classification [2.0305676256390934]
Time series analysis allows us to visualize and understand the evolution of a process over time.
Time series classification consists of constructing algorithms that automatically label time series data.
Deep learning has emerged as one of the most effective methods for tackling the supervised classification task.
arXiv Detail & Related papers (2020-10-01T17:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.