FLEXtime: Filterbank learning for explaining time series
- URL: http://arxiv.org/abs/2411.05841v1
- Date: Wed, 06 Nov 2024 15:06:42 GMT
- Title: FLEXtime: Filterbank learning for explaining time series
- Authors: Thea Brüsch, Kristoffer K. Wickstrøm, Mikkel N. Schmidt, Robert Jenssen, Tommy S. Alstrøm
- Abstract summary: We propose a new method for time series explainability called FLEXtime.
It uses a filterbank to split the time series into frequency bands and learns the optimal combinations of these bands.
Our evaluation shows that FLEXtime on average outperforms state-of-the-art explainability methods across a range of datasets.
- Score: 10.706092195673257
- Abstract: State-of-the-art methods for explaining predictions based on time series are built on learning an instance-wise saliency mask for each time step. However, for many types of time series, the salient information is found in the frequency domain. Adopting existing methods to the frequency domain involves naively zeroing out frequency content in the signals, which goes against established signal processing theory. Therefore, we propose a new method entitled FLEXtime, that uses a filterbank to split the time series into frequency bands and learns the optimal combinations of these bands. FLEXtime avoids the drawbacks of zeroing out frequency bins and is more stable and easier to train compared to the naive method. Our extensive evaluation shows that FLEXtime on average outperforms state-of-the-art explainability methods across a range of datasets. FLEXtime fills an important gap in the time series explainability literature and can provide a valuable tool for a wide range of time series like EEG and audio.
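The core idea described in the abstract can be illustrated with a minimal sketch: split a signal into frequency bands and recombine them with learnable per-band weights that act as a soft frequency mask. This is only a loose illustration, not the paper's implementation; in particular, a brick-wall FFT partition is used here for brevity, whereas the paper argues for a properly designed filterbank precisely to avoid hard zeroing of frequency bins.

```python
import numpy as np

def filterbank_split(x, num_bands):
    """Split a 1-D signal into frequency bands with an idealized
    (brick-wall) FFT partition -- a stand-in for a real filterbank."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), num_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return np.stack(bands)  # shape: (num_bands, len(x))

def combine(bands, weights):
    """Weighted recombination of bands; weights in [0, 1] act as a
    soft frequency mask (the quantity a FLEXtime-style method learns)."""
    return weights @ bands

# Toy signal: 5 Hz + 40 Hz components, 1 s sampled at 128 Hz.
t = np.arange(128) / 128.0
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 40 * t)
bands = filterbank_split(x, num_bands=4)
# Keeping only the lowest band retains just the 5 Hz component.
low_only = combine(bands, np.array([1.0, 0.0, 0.0, 0.0]))
```

With all-ones weights the bands sum back to the original signal, so the learned weights interpolate smoothly between "keep everything" and "keep only the bands the classifier relies on".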
Related papers
- FM-TS: Flow Matching for Time Series Generation [71.31148785577085]
We introduce FM-TS, a rectified Flow Matching-based framework for Time Series generation.
FM-TS is more efficient in terms of training and inference.
We have achieved superior performance in solar forecasting and MuJoCo imputation tasks.
arXiv Detail & Related papers (2024-11-12T03:03:23Z)
- FilterNet: Harnessing Frequency Filters for Time Series Forecasting [34.83702192033196]
FilterNet is built upon our proposed learnable frequency filters to extract key informative temporal patterns by selectively passing or attenuating certain components of time series signals.
Equipped with the two filters, FilterNet can approximately surrogate the linear and attention mappings widely adopted in the time series literature.
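The building block this summary describes, a learnable frequency filter, amounts to an elementwise multiplication of the signal's spectrum by trainable weights. A minimal forward-pass sketch (an assumption about the general mechanism, not FilterNet's actual architecture):

```python
import numpy as np

def frequency_filter_forward(x, h):
    """Apply a (learnable) frequency filter: multiply the rfft spectrum
    elementwise by weights h, then transform back to the time domain.
    In a trained model, h would be optimized by gradient descent."""
    X = np.fft.rfft(x)
    return np.fft.irfft(X * h, n=len(x))

# Example: a hand-set lowpass filter that attenuates bins above index 10.
n = 128
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
h = np.zeros(n // 2 + 1)
h[:10] = 1.0  # pass low frequencies, attenuate the rest
y = frequency_filter_forward(x, h)
```

An all-ones `h` reproduces the input exactly, so the filter can learn to selectively pass or attenuate components, which is the behavior the summary attributes to FilterNet.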
arXiv Detail & Related papers (2024-11-03T16:20:41Z)
- Omni-Dimensional Frequency Learner for General Time Series Analysis [12.473862872616998]
We present the Omni-Dimensional Frequency Learner (ODFL) model, based on an in-depth analysis of all three aspects of the spectrum feature.
Technically, our method is composed of a semantic-adaptive global filter with attention to unsalient frequency bands and a partial operation along the channel dimension.
ODFL achieves consistent state-of-the-art performance in five mainstream time series analysis tasks: short- and long-term forecasting, imputation, classification, and anomaly detection.
arXiv Detail & Related papers (2024-07-15T03:48:16Z)
- Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting [12.989064148254936]
We present a deep frequency derivative learning framework, DERITS, for non-stationary time series forecasting.
Specifically, DERITS is built upon a novel reversible transformation, namely the Frequency Derivative Transformation (FDT).
arXiv Detail & Related papers (2024-06-29T17:56:59Z)
- Explaining time series models using frequency masking [10.706092195673255]
Time series data is important for describing many critical domains such as healthcare, finance, and climate.
Current methods for obtaining saliency maps assume that the salient information is localized in the raw input space.
We propose FreqRISE, which uses masking-based methods to produce explanations in the frequency and time-frequency domains.
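A masking-based frequency explanation of the kind this entry describes can be sketched in the spirit of RISE: sample random masks over frequency bins, score the model on each masked signal, and accumulate a saliency estimate weighted by the score. The mask distribution, normalization, and toy model below are illustrative assumptions, not FreqRISE's exact procedure.

```python
import numpy as np

def freq_rise(x, model, num_masks=500, keep_prob=0.5, seed=0):
    """RISE-style saliency over frequency bins: randomly keep/zero bins,
    weight each mask by the model's score on the masked signal, and
    normalize by the total score."""
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    n_bins = len(X)
    saliency = np.zeros(n_bins)
    total = 0.0
    for _ in range(num_masks):
        m = rng.random(n_bins) < keep_prob        # random binary mask
        x_masked = np.fft.irfft(X * m, n=len(x))  # signal with bins zeroed
        score = model(x_masked)
        saliency += score * m
        total += score
    return saliency / max(total, 1e-12)

# Toy "model" (hypothetical): scores a signal by its low-frequency energy.
model = lambda s: np.abs(np.fft.rfft(s)[:5]).sum()
t = np.arange(128) / 128.0
x = np.sin(2 * np.pi * 3 * t)  # all information sits in bin 3
sal = freq_rise(x, model, num_masks=500)
```

Bins the model actually relies on (here bin 3) receive markedly higher saliency than irrelevant bins. Note that this approach zeroes frequency bins directly, which is precisely the practice the FLEXtime abstract above argues against.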
arXiv Detail & Related papers (2024-06-19T14:19:59Z)
- ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time Series Forecasting [7.694820760102176]
ATFNet is an innovative framework that combines a time domain module and a frequency domain module.
We introduce Dominant Harmonic Series Energy Weighting, a novel mechanism for adjusting the weights between the two modules.
Our Complex-valued Spectrum Attention mechanism offers a novel approach to discern the intricate relationships between different frequency combinations.
arXiv Detail & Related papers (2024-04-08T04:41:39Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Adaptive Frequency Learning in Two-branch Face Forgery Detection [66.91715092251258]
We propose to adaptively learn frequency information in a two-branch detection framework, dubbed AFD.
We liberate our network from the fixed frequency transforms, and achieve better performance with our data- and task-dependent transform layers.
arXiv Detail & Related papers (2022-03-27T14:25:52Z)
- Voice2Series: Reprogramming Acoustic Models for Time Series Classification [65.94154001167608]
Voice2Series is a novel end-to-end approach that reprograms acoustic models for time series classification.
We show that V2S either outperforms or is tied with state-of-the-art methods on 20 tasks, and improves their average accuracy by 1.84%.
arXiv Detail & Related papers (2021-06-17T07:59:15Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.