Stochastic Recurrent Neural Network for Multistep Time Series Forecasting
- URL: http://arxiv.org/abs/2104.12311v1
- Date: Mon, 26 Apr 2021 01:43:43 GMT
- Title: Stochastic Recurrent Neural Network for Multistep Time Series Forecasting
- Authors: Zexuan Yin, Paolo Barucca
- Abstract summary: We leverage advances in deep generative models and the concept of state space models to propose an adaptation of the recurrent neural network for time series forecasting.
Our model preserves the architectural workings of a recurrent neural network, in which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series forecasting based on deep architectures has been gaining
popularity in recent years due to their ability to model complex non-linear
temporal dynamics. The recurrent neural network is one such model capable of
handling variable-length input and output. In this paper, we leverage recent
advances in deep generative models and the concept of state space models to
propose a stochastic adaptation of the recurrent neural network for
multistep-ahead time series forecasting, which is trained with stochastic
gradient variational Bayes. In our model design, the transition function of the
recurrent neural network, which determines the evolution of the hidden states,
is stochastic rather than deterministic as in a regular recurrent neural
network; this is achieved by incorporating a latent random variable into the
transition process which captures the stochasticity of the temporal dynamics.
Our model preserves the architectural workings of a recurrent neural network
in which all relevant information is encapsulated in its hidden states, and
this flexibility allows our model to be easily integrated into any deep
architecture for sequential modelling. We test our model on a wide range of
datasets from finance to healthcare; results show that the stochastic recurrent
neural network consistently outperforms its deterministic counterpart.
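The abstract pins down the mechanism well enough for a rough sketch: a recurrent cell whose transition is conditioned on a latent random variable z_t, with a prior p(z_t | h_{t-1}), an approximate posterior q(z_t | h_{t-1}, x_t), and a negative ELBO optimised by stochastic gradient variational Bayes. The minimal PyTorch sketch below is illustrative only; all names, dimensions, and layer choices are assumptions, and the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class StochasticRNN(nn.Module):
    """Illustrative sketch: a GRU whose transition takes an extra latent
    variable z_t, trained by maximising the ELBO (SGVB). Not the paper's
    exact architecture."""

    def __init__(self, x_dim=1, h_dim=32, z_dim=8):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)               # p(z_t | h_{t-1})
        self.posterior = nn.Linear(h_dim + x_dim, 2 * z_dim)   # q(z_t | h_{t-1}, x_t)
        self.cell = nn.GRUCell(x_dim + z_dim, h_dim)           # stochastic transition
        self.decoder = nn.Linear(h_dim, 2 * x_dim)             # p(x_t | h_t)

    def forward(self, x):                                      # x: (T, B, x_dim)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.cell.hidden_size)
        nll = kl = 0.0
        for t in range(T):
            pm, plv = self.prior(h).chunk(2, dim=-1)
            qm, qlv = self.posterior(torch.cat([h, x[t]], dim=-1)).chunk(2, dim=-1)
            z = qm + torch.randn_like(qm) * (0.5 * qlv).exp()  # reparameterisation trick
            h = self.cell(torch.cat([x[t], z], dim=-1), h)     # latent-conditioned update
            m, lv = self.decoder(h).chunk(2, dim=-1)
            # Gaussian negative log-likelihood (constant term dropped)
            nll = nll + 0.5 * (((x[t] - m) ** 2) / lv.exp() + lv).sum()
            # Analytic KL between the two diagonal Gaussians q and p
            kl = kl + 0.5 * (plv - qlv + (qlv.exp() + (qm - pm) ** 2) / plv.exp() - 1).sum()
        return (nll + kl) / (T * B)                            # negative ELBO per element

model = StochasticRNN()
loss = model(torch.randn(24, 16, 1))   # 24 steps, batch of 16, univariate series
loss.backward()
```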
Related papers
- Gradient-free training of recurrent neural networks [3.272216546040443]
We introduce a computational approach to construct all weights and biases of a recurrent neural network without using gradient-based methods.
The approach is based on a combination of random feature networks and Koopman operator theory for dynamical systems.
In computational experiments on time series forecasting for chaotic dynamical systems and on control problems, we observe improvements in both the training time and the forecasting accuracy of the recurrent neural networks we construct.
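The paper's actual construction combines random feature networks with Koopman operator theory; as a loose, hypothetical illustration of the gradient-free idea, one can freeze random recurrent weights and solve only the linear readout in closed form:

```python
import numpy as np

def fit_random_feature_rnn(x, h_dim=200, seed=0):
    """Fix random recurrent weights, then solve only the linear readout in
    closed form -- no gradient-based optimisation anywhere."""
    rng = np.random.default_rng(seed)
    w_in = rng.normal(0.0, 0.5, h_dim)                          # random input weights (frozen)
    W = rng.normal(0.0, 1.0, (h_dim, h_dim)) / np.sqrt(h_dim)   # random recurrence (frozen)
    h, states = np.zeros(h_dim), []
    for t in range(len(x) - 1):
        h = np.tanh(w_in * x[t] + W @ h)                        # random-feature state update
        states.append(h)
    H = np.asarray(states)
    W_out, *_ = np.linalg.lstsq(H, x[1:], rcond=None)           # least-squares readout
    return w_in, W, W_out

w_in, W, W_out = fit_random_feature_rnn(np.sin(np.linspace(0, 20, 500)))
```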
arXiv Detail & Related papers (2024-10-30T21:24:34Z)
- Recurrent Interpolants for Probabilistic Time Series Prediction [10.422645245061899]
Sequential models like recurrent neural networks and transformers have become standard for probabilistic time series forecasting.
Recent work explores generative approaches using diffusion or flow-based models, extending to time series imputation and forecasting.
This work proposes a novel method combining recurrent neural networks' efficiency with diffusion models' probabilistic modeling, based on interpolants and conditional generation with control features.
arXiv Detail & Related papers (2024-09-18T03:52:48Z)
- A short trajectory is all you need: A transformer-based model for long-time dissipative quantum dynamics [0.0]
We show that a deep artificial neural network can predict the long-time population dynamics of a quantum system coupled to a dissipative environment.
Our model is more accurate than classical forecasting models, such as recurrent neural networks.
arXiv Detail & Related papers (2024-09-17T16:17:52Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregularly spaced observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
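The paper surveys several CTRNN variants; one common ingredient, sketched hypothetically below (all names are illustrative), is to decay the hidden state according to the time elapsed before each irregular observation:

```python
import torch
import torch.nn as nn

class DecayRNNCell(nn.Module):
    """Sketch of one continuous-time ingredient: decay the hidden state
    towards zero by the elapsed time dt before each irregular observation."""

    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.cell = nn.GRUCell(x_dim, h_dim)
        self.log_tau = nn.Parameter(torch.zeros(h_dim))  # learnable time constants

    def forward(self, x_t, dt, h):                       # x_t: (B, x_dim), dt: (B,)
        h = h * torch.exp(-dt.unsqueeze(-1) / self.log_tau.exp())  # continuous-time decay
        return self.cell(x_t, h)                         # discrete update at observation
```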
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster than their ODE-based counterparts.
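Schematically, such closed-form models replace the numerical ODE solve with an explicit, time-gated mixture of learned states; the expression below illustrates the general shape and is not necessarily the paper's exact equation:

```latex
% Schematic closed-form (CfC-style) state update: the ODE solution is
% replaced by an explicit, time-gated mixture of learned heads f, g, h,
% so no numerical solver is needed.
x(t) = \sigma\bigl(-f(x, I; \theta_f)\, t\bigr) \odot g(x, I; \theta_g)
     + \bigl[1 - \sigma\bigl(-f(x, I; \theta_f)\, t\bigr)\bigr] \odot h(x, I; \theta_h)
```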
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Modeling the Nonsmoothness of Modern Neural Networks [35.93486244163653]
We quantify the nonsmoothness using a feature named the sum of the magnitude of peaks (SMP).
We envision that the nonsmoothness feature can potentially be used as a forensic tool for regression-based applications of neural networks.
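As a hypothetical reading of that feature, SMP can be sketched as the sum of the heights of a signal's local maxima; the paper's precise definition may differ:

```python
import numpy as np
from scipy.signal import find_peaks

def sum_of_peak_magnitudes(y):
    """Sketch of an SMP-style feature: locate the local maxima of |y|
    and sum their heights. Illustrative interpretation only."""
    peaks, _ = find_peaks(np.abs(y))        # indices of local maxima of |y|
    return np.abs(y)[peaks].sum()

smp = sum_of_peak_magnitudes(np.sin(np.linspace(0, 30, 1000)))
```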
arXiv Detail & Related papers (2021-03-26T20:55:19Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
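A minimal sketch of the per-time-stamp parameterisation, with an illustrative smoothness penalty between neighbouring means standing in for the paper's smoothness-inducing prior:

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Sketch: emit a mean and log-variance for every time-stamp, plus a
    simple penalty that discourages abrupt jumps between neighbouring
    means (illustrative, not the paper's exact prior)."""

    def __init__(self, h_dim, x_dim):
        super().__init__()
        self.net = nn.Linear(h_dim, 2 * x_dim)

    def forward(self, h):                                # h: (T, B, h_dim)
        mean, log_var = self.net(h).chunk(2, dim=-1)     # per-step Gaussian parameters
        smooth = ((mean[1:] - mean[:-1]) ** 2).mean()    # penalise jumps in the mean
        return mean, log_var, smooth
```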
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics via implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated by nonlinear interlinked gates.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
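An illustrative explicit-Euler reading of such a system, in which a learned nonlinearity modulates the effective time constant of an otherwise linear first-order state (names and the solver choice are assumptions; the paper uses a dedicated ODE solver):

```python
import torch
import torch.nn as nn

class LTCStep(nn.Module):
    """Sketch of a liquid time-constant update: a linear first-order system
    x' = -(1/tau + f) * x + f * A whose effective time constant is
    modulated by a learned gate f. One explicit Euler step is shown."""

    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(x_dim + h_dim, h_dim), nn.Sigmoid())
        self.tau = nn.Parameter(torch.ones(h_dim))   # base time constants
        self.A = nn.Parameter(torch.zeros(h_dim))    # bias of the driven system

    def forward(self, x, I, dt=0.1):                 # x: (B, h_dim), I: (B, x_dim)
        f = self.f(torch.cat([I, x], dim=-1))        # nonlinear gate on state and input
        dx = -(1.0 / self.tau + f) * x + f * self.A  # input-dependent time constant
        return x + dt * dx                           # explicit Euler step
```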
arXiv Detail & Related papers (2020-06-08T09:53:35Z)