Context Neural Networks: A Scalable Multivariate Model for Time Series Forecasting
- URL: http://arxiv.org/abs/2405.07117v1
- Date: Sun, 12 May 2024 00:21:57 GMT
- Title: Context Neural Networks: A Scalable Multivariate Model for Time Series Forecasting
- Authors: Abishek Sriramulu, Christoph Bergmeir, Slawek Smyl
- Abstract summary: Real-world time series often exhibit complex interdependencies that cannot be captured in isolation.
This paper introduces the Context Neural Network, an efficient linear complexity approach for augmenting time series models with relevant contextual insights.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world time series often exhibit complex interdependencies that cannot be captured in isolation. Global models that model past data from multiple related time series globally while producing series-specific forecasts locally are now common. However, their forecasts for each individual series remain isolated, failing to account for the current state of its neighbouring series. Multivariate models like multivariate attention and graph neural networks can explicitly incorporate inter-series information, thus addressing the shortcomings of global models. However, these techniques exhibit quadratic complexity per timestep, limiting scalability. This paper introduces the Context Neural Network, an efficient linear complexity approach for augmenting time series models with relevant contextual insights from neighbouring time series without significant computational overhead. The proposed method enriches predictive models by providing the target series with real-time information from its neighbours, addressing the limitations of global models, yet remaining computationally tractable for large datasets.
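The abstract contrasts quadratic-cost pairwise mechanisms (multivariate attention, graph neural networks) with a linear-cost way of sharing information across series. A minimal toy sketch of that trade-off, assuming a simple mean-pooled context vector (this is an illustration of the linear-complexity idea, not the paper's actual architecture):

```python
import numpy as np

# Illustrative sketch (NOT the Context Neural Network itself): one way to get
# linear-per-timestep cost is to pool all series states into a single shared
# context vector, then enrich each series' local input with that context.
# Pairwise multivariate attention would instead cost O(N^2) per timestep.

rng = np.random.default_rng(0)
n_series, d = 1000, 16                       # N series, state width (made up)
embeddings = rng.normal(size=(n_series, d))  # per-series state at time t

# O(N): a single pooled summary of all neighbouring series.
context = embeddings.mean(axis=0)            # shape (d,)

# Each series sees its own state plus the shared context: still O(N) overall.
enriched = np.concatenate(
    [embeddings, np.broadcast_to(context, embeddings.shape)], axis=1
)
print(enriched.shape)                        # (1000, 32)
```

Any pooling that summarizes all series in one pass (mean, learned weighted sum, etc.) preserves the linear scaling; the paper's contribution is making that shared context *relevant* to each target series.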
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are collected from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- Adaptive Convolutional Forecasting Network Based on Time Series Feature-Driven [9.133955922897371]
Time series data in real-world scenarios contain a substantial amount of nonlinear information.
We introduce multi-resolution convolution and deformable convolution operations.
We propose ACNet, an adaptive convolutional network designed to effectively model the local and global temporal dependencies.
arXiv Detail & Related papers (2024-05-20T14:05:35Z)
- GinAR: An End-To-End Multivariate Time Series Forecasting Model Suitable for Variable Missing [21.980379175333443]
We propose a novel Graph Interpolation Attention Recursive Network (named GinAR) to model the spatial-temporal dependencies over the limited collected data for forecasting.
GinAR consists of two key components: attention and adaptive graph convolution.
Experiments conducted on five real-world datasets demonstrate that GinAR outperforms 11 SOTA baselines, and even when 90% of variables are missing, it can still accurately predict the future values of all variables.
arXiv Detail & Related papers (2024-05-18T16:42:44Z)
- Time Series Data Augmentation as an Imbalanced Learning Problem [2.5536554335016417]
We use oversampling strategies to create synthetic time series observations and improve the accuracy of forecasting models.
We carried out experiments using 7 different databases that contain a total of 5502 univariate time series.
We found that the proposed solution outperforms both a global and a local model, thus providing a better trade-off between these two approaches.
arXiv Detail & Related papers (2024-04-29T09:27:15Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Deep Autoregressive Models with Spectral Attention [74.08846528440024]
We propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module.
By characterizing the embedding of the time series in the spectral domain as occurrences of a random process, our method can identify global trends and seasonality patterns.
Two spectral attention models, global and local to the time series, integrate this information into the forecast and perform spectral filtering to remove noise from the time series.
arXiv Detail & Related papers (2021-07-13T11:08:47Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Global Models for Time Series Forecasting: A Simulation Study [2.580765958706854]
We simulate time series from simple data generating processes (DGP), such as Auto Regressive (AR) and Seasonal AR, to complex DGPs, such as Chaotic Logistic Map, Self-Exciting Threshold Auto-Regressive, and Mackey-Glass equations.
The lengths and the number of series in the dataset are varied in different scenarios.
We perform experiments on these datasets using global forecasting models including Recurrent Neural Networks (RNN), Feed-Forward Neural Networks, Pooled Regression (PR) models, and Light Gradient Boosting Models (LGBM).
arXiv Detail & Related papers (2020-12-23T04:45:52Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
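A graph learning module that extracts uni-directed relations among variables can be sketched roughly as follows; the exact parameterization here (embedding width, the saturation constant `alpha`, top-k sparsification) is illustrative and not taken verbatim from the paper:

```python
import numpy as np

# Hedged sketch of a graph-learning module in the spirit of the paper above:
# learnable node embeddings produce an asymmetric (uni-directed) adjacency.
# In training, M1 and M2 would be learned parameters; here they are random.

rng = np.random.default_rng(1)
n_nodes, d, alpha, k = 8, 4, 3.0, 3

M1 = rng.normal(size=(n_nodes, d))   # stand-ins for learnable node embeddings
M2 = rng.normal(size=(n_nodes, d))

# Antisymmetric score matrix: for each pair (i, j), at most one of the two
# directions survives the ReLU, which is what makes relations uni-directed.
scores = np.tanh(alpha * (M1 @ M2.T - M2 @ M1.T))
A = np.maximum(scores, 0.0)

# Keep only the top-k outgoing edges per node for sparsity.
for i in range(n_nodes):
    weak = np.argsort(A[i])[:-k]     # indices of all but the k largest entries
    A[i, weak] = 0.0

print((A > 0).sum(axis=1))           # each node keeps at most k neighbours
```

The antisymmetric construction `M1 @ M2.T - M2 @ M1.T` is the key design choice: it guarantees that if an edge i→j is kept, the reverse edge j→i is suppressed, so the learned graph encodes directed influence rather than mere correlation.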
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
- Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows [8.859284959951204]
Time series forecasting is fundamental to scientific and engineering problems.
Deep learning methods are well suited for this problem.
We show that it improves over the state-of-the-art for standard metrics on many real-world data sets.
arXiv Detail & Related papers (2020-02-14T16:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.