DeformTime: Capturing Variable Dependencies with Deformable Attention for Time Series Forecasting
- URL: http://arxiv.org/abs/2406.07438v2
- Date: Tue, 18 Jun 2024 17:42:52 GMT
- Title: DeformTime: Capturing Variable Dependencies with Deformable Attention for Time Series Forecasting
- Authors: Yuxuan Shu, Vasileios Lampos
- Abstract summary: We present DeformTime, a neural network architecture that attempts to capture correlated temporal patterns from the input space.
We conduct extensive experiments on 6 MTS data sets, using previously established benchmarks as well as challenging infectious disease modelling tasks.
Results demonstrate that DeformTime improves accuracy against previous competitive methods across the vast majority of MTS forecasting tasks.
- Score: 0.34530027457862006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multivariate time series (MTS) forecasting, existing state-of-the-art deep learning approaches tend to focus on autoregressive formulations and overlook the information within exogenous indicators. To address this limitation, we present DeformTime, a neural network architecture that attempts to capture correlated temporal patterns from the input space, and hence, improve forecasting accuracy. It deploys two core operations performed by deformable attention blocks (DABs): learning dependencies across variables from different time steps (variable DAB), and preserving temporal dependencies in data from previous time steps (temporal DAB). Input data transformation is explicitly designed to enhance learning from the deformed series of information while passing through a DAB. We conduct extensive experiments on 6 MTS data sets, using previously established benchmarks as well as challenging infectious disease modelling tasks with more exogenous variables. The results demonstrate that DeformTime improves accuracy against previous competitive methods across the vast majority of MTS forecasting tasks, reducing the mean absolute error by 10% on average. Notably, performance gains remain consistent across longer forecasting horizons.
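The deformable attention blocks are the architectural core, so a rough illustration may help. The sketch below shows a generic temporal deformable attention layer in the spirit of the temporal DAB: each query predicts fractional time offsets, keys and values are sampled at the deformed positions by linear interpolation, and standard attention follows. This is a sketch under those assumptions, not the authors' implementation; all names (e.g. DeformableTemporalAttention, n_points) are illustrative.
```python
# Minimal sketch of temporal deformable attention (learned offsets +
# interpolation); NOT the authors' DAB, all names are illustrative.
import torch
import torch.nn as nn

class DeformableTemporalAttention(nn.Module):
    def __init__(self, d_model: int, n_points: int = 4):
        super().__init__()
        self.n_points = n_points
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_proj = nn.Linear(d_model, 2 * d_model)
        # Each query predicts fractional time offsets for its sampling points.
        self.offset_net = nn.Linear(d_model, n_points)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model)
        B, T, D = x.shape
        q = self.q_proj(x)                                      # (B, T, D)
        # Deform the reference positions by learned offsets, kept in range.
        base = torch.arange(T, device=x.device, dtype=x.dtype)
        pos = (base[None, :, None] + self.offset_net(q)).clamp(0, T - 1)
        lo, hi = pos.floor().long(), pos.ceil().long()          # (B, T, P)
        w = (pos - lo.to(pos.dtype)).unsqueeze(-1)              # (B, T, P, 1)
        kv = self.kv_proj(x)                                    # (B, T, 2D)

        def gather(idx):  # (B, T, P) -> (B, T, P, 2D)
            flat = idx.reshape(B, -1, 1).expand(-1, -1, 2 * D)
            return kv.gather(1, flat).reshape(B, T, self.n_points, 2 * D)

        # Linear interpolation between the two nearest integer time steps.
        k, v = ((1 - w) * gather(lo) + w * gather(hi)).chunk(2, dim=-1)
        attn = torch.softmax((q.unsqueeze(2) * k).sum(-1) / D ** 0.5, dim=-1)
        return self.out_proj((attn.unsqueeze(-1) * v).sum(2))   # (B, T, D)
```
A variable DAB would apply the same offset-and-sample idea along the variable axis instead of the time axis.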
Related papers
- Is Precise Recovery Necessary? A Task-Oriented Imputation Approach for Time Series Forecasting on Variable Subset [27.180618587832463]
We propose Task-Oriented Imputation for Variable Subset Forecasting (TOI-VSF), a framework for time series forecasting when only a subset of variables is available.
TOI-VSF incorporates a self-supervised imputation module, agnostic to the forecasting model, designed to fill in missing variables.
Extensive experiments across four datasets demonstrate the superiority of TOI-VSF, outperforming baseline methods by 15% on average.
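As a hedged illustration of the mask-and-reconstruct signal a model-agnostic self-supervised imputation module could use (the Imputer network and loss below are assumptions for illustration, not the TOI-VSF design):
```python
# Illustrative mask-and-reconstruct training step for a self-supervised
# imputer; network and loss are assumptions, not the TOI-VSF design.
import torch
import torch.nn as nn

class Imputer(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(), nn.Linear(hidden, n_vars))

    def forward(self, x):  # x: (batch, time, n_vars)
        return self.net(x)

def self_supervised_step(imputer, x, mask_ratio=0.3):
    # Hide a random subset of observed values and train the imputer to
    # recover them; the downstream forecaster is untouched (model-agnostic).
    mask = (torch.rand_like(x) < mask_ratio).float()
    recon = imputer(x * (1 - mask))
    return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)
```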
arXiv Detail & Related papers (2024-11-15T04:00:54Z)
- Probing the Robustness of Time-series Forecasting Models with CounterfacTS [1.823020744088554]
We present and publicly release CounterfacTS, a tool to probe the robustness of deep learning models in time-series forecasting tasks.
CounterfacTS has a user-friendly interface that allows the user to visualize, compare and quantify time series data and their forecasts.
arXiv Detail & Related papers (2024-03-06T07:34:47Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- FrAug: Frequency Domain Augmentation for Time Series Forecasting [6.508992154478217]
Data augmentation (DA) has become a de facto solution to expand training data size for deep learning.
This paper proposes simple yet effective frequency domain augmentation techniques that ensure the semantic consistency of augmented data-label pairs in forecasting.
Our results show that FrAug can boost the forecasting accuracy of TSF models in most cases.
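Frequency masking, one of the augmentations described, removes random frequency components from the input window and its label jointly so the pair stays consistent. A minimal numpy sketch under those assumptions (drop_ratio and keeping the DC bin are illustrative choices):
```python
# Minimal sketch of frequency masking for a forecasting (input, label)
# pair; drop_ratio and keeping the DC bin are illustrative choices.
import numpy as np

def freq_mask(x, y, drop_ratio=0.1, rng=None):
    """x: (lookback,), y: (horizon,); returns an augmented (x, y) pair."""
    rng = rng or np.random.default_rng()
    s = np.concatenate([x, y])        # augment input and label together
    spec = np.fft.rfft(s)
    drop = rng.random(spec.shape[0]) < drop_ratio
    drop[0] = False                   # keep the DC bin (series mean)
    spec[drop] = 0.0
    s_aug = np.fft.irfft(spec, n=s.shape[0])
    return s_aug[: len(x)], s_aug[len(x):]
```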
arXiv Detail & Related papers (2023-02-18T11:25:42Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over a short period, leaving a large gap between the capacity of deep models and the limited, noisy data available.
We address time series forecasting with generative modeling, proposing a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
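For orientation, the sketch below shows the standard closed-form forward diffusion step that diffusion-based generators build on; the linear noise schedule is an assumed common default, not taken from the paper.
```python
# Standard forward diffusion step q(x_t | x_0); the schedule is an
# assumed common default, not the paper's configuration.
import torch

def diffuse(x0, t, alpha_bar):
    """Sample x_t ~ N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0)
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * noise, noise

betas = torch.linspace(1e-4, 0.02, 100)      # illustrative linear schedule
alpha_bar = torch.cumprod(1 - betas, dim=0)
```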
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- SAITS: Self-Attention-based Imputation for Time Series [6.321652307514677]
SAITS is a novel method based on the self-attention mechanism for missing value imputation in time series.
It learns missing values from a weighted combination of two diagonally-masked self-attention blocks.
Tests show that SAITS outperforms state-of-the-art methods on the time-series imputation task while remaining efficient.
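Diagonally-masked self-attention is easy to sketch: blocking the diagonal prevents a time step from attending to itself, so its value must be reconstructed from other steps, which is what imputation needs. A single-head sketch (the class name is illustrative):
```python
# Single-head diagonally-masked self-attention; the diagonal is blocked
# so each step is reconstructed from the others. Class name illustrative.
import torch
import torch.nn as nn

class DiagMaskedSelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)

    def forward(self, x):  # x: (batch, time, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        diag = torch.eye(x.shape[1], dtype=torch.bool, device=x.device)
        scores = scores.masked_fill(diag, float("-inf"))  # no self-lookup
        return torch.softmax(scores, dim=-1) @ v
```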
arXiv Detail & Related papers (2022-02-17T08:40:42Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Spatiotemporal Attention for Multivariate Time Series Prediction and Interpretation [17.568599402858037]
We propose a spatiotemporal attention mechanism (STAM) for simultaneous learning of the most important time steps and variables.
Results: STAM maintains state-of-the-art prediction accuracy while offering the benefit of accurate interpretability.
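As a rough illustration only (not STAM's exact architecture), attention over both axes can be sketched as two softmax weightings, one over variables and one over time steps; returning the weights is what enables interpretation.
```python
# Illustrative dual attention over variables and time steps; the layer
# names and pooling are assumptions, not STAM's architecture.
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    def __init__(self, n_vars: int, d_model: int = 32):
        super().__init__()
        self.embed = nn.Linear(1, d_model)       # per-value embedding
        self.var_score = nn.Linear(d_model, 1)   # scores each variable
        self.time_score = nn.Linear(n_vars, 1)   # scores each time step

    def forward(self, x):  # x: (batch, time, n_vars)
        e = self.embed(x.unsqueeze(-1))                                 # (B, T, V, D)
        a_var = torch.softmax(self.var_score(e).squeeze(-1), dim=-1)    # (B, T, V)
        a_time = torch.softmax(self.time_score(x).squeeze(-1), dim=-1)  # (B, T)
        context = (x * a_var).sum(-1)        # weight variables per step
        pooled = (context * a_time).sum(-1)  # weight time steps
        return pooled, a_time, a_var         # weights support interpretation
```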
arXiv Detail & Related papers (2020-08-11T17:34:55Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
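The technique itself is straightforward to sketch: at prediction time, normalization statistics come from the current test batch instead of the training-time running averages. In PyTorch this amounts to flipping batch-norm layers back into training mode during inference (a generic sketch, not the paper's code):
```python
# Generic sketch of prediction-time batch normalization: BN layers use
# the test batch's statistics instead of stored running averages.
import torch
import torch.nn as nn

@torch.no_grad()
def predict_with_batch_stats(model: nn.Module, x: torch.Tensor):
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()  # use batch stats; note this also updates running stats
    return model(x)
```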
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
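A hedged sketch of a THP-style conditional intensity head: a softplus of an affine function of the encoder's hidden state and the elapsed time keeps the intensity positive. Parameter names are illustrative assumptions, not the paper's notation.
```python
# Illustrative conditional-intensity head in the spirit of THP; names
# (IntensityHead, alpha) are assumptions, not the paper's notation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntensityHead(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.w = nn.Linear(d_model, 1)
        self.alpha = nn.Parameter(torch.tensor(0.1))

    def forward(self, h, dt):
        # h: (batch, d_model) state after the last event
        # dt: (batch,) time elapsed since that event
        return F.softplus(self.w(h).squeeze(-1) + self.alpha * dt)
```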
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.