Evaluating the effectiveness of predicting covariates in LSTM Networks for Time Series Forecasting
- URL: http://arxiv.org/abs/2404.18553v1
- Date: Mon, 29 Apr 2024 09:51:25 GMT
- Title: Evaluating the effectiveness of predicting covariates in LSTM Networks for Time Series Forecasting
- Authors: Gareth Davies
- Abstract summary: We introduce a novel approach using seasonal time segments in combination with an RNN architecture, which is both simple and extremely effective over long forecast horizons.
Our findings reveal that, under certain conditions, jointly training covariates with target variables can improve the overall performance of the model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoregressive Recurrent Neural Networks are widely employed in time-series forecasting tasks, demonstrating effectiveness in univariate and certain multivariate scenarios. However, their inherent structure does not readily accommodate the integration of future, time-dependent covariates. A proposed solution, outlined by Salinas et al. (2019), suggests forecasting both covariates and the target variable in a multivariate framework. In this study, we conducted comprehensive tests on publicly available time-series datasets, artificially introducing covariates that are highly correlated with future time-step values. Our evaluation aimed to assess the performance of an LSTM network when considering these covariates and compare it against a univariate baseline. As part of this study, we introduce a novel approach using seasonal time segments in combination with an RNN architecture, which is both simple and extremely effective over long forecast horizons, with performance comparable to many state-of-the-art architectures. Our findings from more than 120 models reveal that, under certain conditions, jointly training covariates with target variables can improve the overall performance of the model, but there is often a significant performance disparity between multivariate and univariate predictions. Surprisingly, even when provided with covariates informing the network about future target values, multivariate predictions exhibited inferior performance. In essence, compelling the network to predict multiple values can prove detrimental to model performance, even in the presence of informative covariates. These results suggest that LSTM architectures may not be suitable for forecasting tasks where predicting covariates would typically be expected to enhance model accuracy.
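To make the experimental setup described in the abstract concrete, below is a minimal PyTorch sketch (an assumption, not the paper's code) of the comparison: the same LSTM is trained once as a univariate baseline that predicts only the target, and once in the multivariate style suggested by Salinas et al., where it must also predict an artificially constructed covariate whose last observed value leaks the future target. Names such as `SeqForecaster` and `make_batch` are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class SeqForecaster(nn.Module):
    """LSTM mapping an input window to one-step-ahead predictions for
    `n_outputs` series (1 = univariate target only, 2 = target plus covariate)."""
    def __init__(self, n_inputs, n_outputs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                  # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # next-step prediction(s)

def make_batch(batch=32, window=24):
    """Synthetic data: the covariate is a noisy copy of the target shifted one
    step ahead, so its last observed value is informative about the future target."""
    t = torch.rand(batch, window + 2)
    cov = t[:, 1:] + 0.05 * torch.randn(batch, window + 1)
    x = torch.stack([t[:, :window], cov[:, :window]], dim=-1)      # (batch, window, 2)
    y_uni = t[:, window:window + 1]                                 # next target only
    y_joint = torch.stack([t[:, window], cov[:, window]], dim=-1)   # target + covariate
    return x, y_uni, y_joint

def train(model, joint, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x, y_uni, y_joint = make_batch()
        loss = nn.functional.mse_loss(model(x), y_joint if joint else y_uni)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Univariate baseline: covariates enter as inputs only; the loss covers the target alone.
baseline_loss = train(SeqForecaster(n_inputs=2, n_outputs=1), joint=False)
# Multivariate variant: the same network must also predict the covariate's next value.
joint_loss = train(SeqForecaster(n_inputs=2, n_outputs=2), joint=True)
print(f"univariate baseline MSE: {baseline_loss:.4f}  joint MSE: {joint_loss:.4f}")
```

Note that the printed losses are training MSEs over all predicted outputs; the paper's evaluation compares accuracy on the target series only.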
Related papers
- MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention mechanism-based predictor and conditional discriminator to optimize prediction results at the coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z) - When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
arXiv Detail & Related papers (2023-10-17T20:30:16Z) - The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting [50.48888534815361]
We show that models trained with the Channel Independent (CI) strategy outperform those trained with the Channel Dependent (CD) strategy.
Our results indicate that the CD approach has higher capacity but often lacks the robustness needed to accurately predict distributionally drifted time series.
We propose a modified CD method called Predict Residuals with Regularization (PRReg) that can surpass the CI strategy.
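As a rough illustration of the two strategies contrasted in this entry (a sketch under assumptions, not the paper's code or models): a Channel Independent setup fits one model per series using only that series' own history, while a Channel Dependent setup fits a single model over all channels jointly. The simple linear autoregressions and the `lagged` helper below are hypothetical stand-ins.

```python
import numpy as np

def lagged(data, p=3):
    """Build (lagged inputs, next-step targets) from a (T, C) array."""
    X = np.stack([data[i:i + p].ravel() for i in range(len(data) - p)])
    y = data[p:]
    return X, y

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(200, 2)), axis=0)   # two toy channels

# Channel Dependent (CD): one model sees all channels and predicts all channels jointly.
X_cd, y_cd = lagged(series)
W_cd, *_ = np.linalg.lstsq(X_cd, y_cd, rcond=None)

# Channel Independent (CI): one model per channel, trained only on that channel.
W_ci = []
for c in range(series.shape[1]):
    X_c, y_c = lagged(series[:, c:c + 1])
    w, *_ = np.linalg.lstsq(X_c, y_c, rcond=None)
    W_ci.append(w)

# One-step forecasts from the last window under each strategy.
last = series[-3:]
pred_cd = last.ravel() @ W_cd
pred_ci = np.array([(last[:, c:c + 1].ravel() @ W_ci[c])[0] for c in range(2)])
print(pred_cd, pred_ci)
```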
arXiv Detail & Related papers (2023-04-11T13:15:33Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Probabilistic AutoRegressive Neural Networks for Accurate Long-range Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z) - A Statistics and Deep Learning Hybrid Method for Multivariate Time Series Forecasting and Mortality Modeling [0.0]
Exponential Smoothing Recurrent Neural Network (ES-RNN) is a hybrid between a statistical forecasting model and a recurrent neural network variant.
ES-RNN achieves a 9.4% improvement in absolute error in the Makridakis-4 Forecasting Competition.
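As a back-of-the-envelope, univariate toy version of the hybrid idea in this entry (an assumption, not the paper's ES-RNN implementation): exponential smoothing estimates the level of a series, the series is normalized by that level, and an LSTM forecasts the normalized values before rescaling. The names `exp_smooth_level` and `ResidualLSTM` are illustrative.

```python
import torch
import torch.nn as nn

def exp_smooth_level(series, alpha=0.3):
    # Simple exponential smoothing of the level: l_t = alpha*y_t + (1 - alpha)*l_{t-1}
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return torch.stack(levels)

class ResidualLSTM(nn.Module):
    """LSTM that forecasts the next value of the level-normalized series."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])

# Usage: normalize by the smoothed level, forecast the ratio, then rescale.
series = torch.rand(100) + 5.0                     # toy positive-valued series
levels = exp_smooth_level(series)
normalized = (series / levels).view(1, -1, 1)      # (1, time, 1)
model = ResidualLSTM()
next_ratio = model(normalized)                     # forecast of next normalized value
forecast = next_ratio * levels[-1]                 # back to the original scale
```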
arXiv Detail & Related papers (2021-12-16T04:44:19Z) - Forecasting High-Dimensional Covariance Matrices of Asset Returns with Hybrid GARCH-LSTMs [0.0]
This paper investigates the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns.
The proposed model is very promising: it not only outperforms the equally weighted portfolio but also beats its econometric counterpart by a significant margin.
arXiv Detail & Related papers (2021-08-25T23:41:43Z) - A Functional Model for Structure Learning and Parameter Estimation in Continuous Time Bayesian Network: An Application in Identifying Patterns of Multiple Chronic Conditions [2.440763941001707]
We propose a continuous time Bayesian network with conditional dependencies, represented as Poisson regression.
We use a dataset of patients with multiple chronic conditions extracted from electronic health records of the Department of Veterans Affairs.
The proposed approach provides a sparse intuitive representation of the complex functional relationships between multiple chronic conditions.
arXiv Detail & Related papers (2020-07-31T05:02:34Z) - Spatiotemporal Adaptive Neural Network for Long-term Forecasting of Financial Time Series [0.2793095554369281]
We investigate whether deep neural networks (DNNs) can be used to forecast time series (TS) conjointly.
We make use of the dynamic factor graph (DFG) to build a multivariate autoregressive model.
With ACTM, it is possible to vary the autoregressive order of a TS model over time and model a larger set of probability distributions.
arXiv Detail & Related papers (2020-03-27T00:53:11Z) - Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z) - Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)