Prediction of Hilbertian autoregressive processes : a Recurrent Neural
Network approach
- URL: http://arxiv.org/abs/2008.11155v1
- Date: Tue, 25 Aug 2020 16:43:24 GMT
- Title: Prediction of Hilbertian autoregressive processes : a Recurrent Neural
Network approach
- Authors: Clément Carré and André Mas
- Abstract summary: We propose to compare the classical prediction methodology, based on estimation of the autocorrelation operator, with a neural network learning approach.
The latter is based on a popular variant of Recurrent Neural Networks: Long Short-Term Memory (LSTM) networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The autoregressive Hilbertian model (ARH) was introduced in the early 1990s by
Denis Bosq. It has been the subject of a vast literature and has given rise to
numerous extensions. The model generalizes the classical multidimensional
autoregressive model, widely used in time series analysis, and has been
successfully applied in numerous fields such as finance, industry, and biology.
We propose to compare the classical prediction methodology, based on estimation
of the autocorrelation operator, with a neural network learning approach. The
latter is based on a popular variant of Recurrent Neural Networks: Long
Short-Term Memory (LSTM) networks. The comparison is carried out on simulations
and real datasets.
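For intuition, the ARH(1) model can be written X_{n+1} = rho(X_n) + eps_{n+1}, where each X_n is a random curve in a Hilbert space, rho is a bounded linear operator with operator norm below one, and (eps_n) is a Hilbertian white noise. The sketch below contrasts, on a discretized grid, the two predictors the abstract refers to: the classical one, which estimates rho from the lag-0 and lag-1 covariance operators with spectral regularization, and an LSTM forecaster. The Gaussian kernel, grid size, truncation level k, and LSTM hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
d, n, k = 50, 500, 5          # grid points, sample size, truncation level (assumed)

# Simulate a discretized ARH(1): X_{t+1} = rho(X_t) + eps_{t+1}, with rho an
# integral operator given by a (hypothetical) Gaussian kernel, rescaled so its
# spectral norm is 0.8 < 1, which ensures a stationary solution exists.
s = np.linspace(0.0, 1.0, d)
K = np.exp(-10.0 * (s[:, None] - s[None, :]) ** 2) / d   # kernel x quadrature weight
K *= 0.8 / np.linalg.norm(K, 2)

X = np.zeros((n, d))
for t in range(n - 1):
    X[t + 1] = K @ X[t] + rng.normal(0.0, 0.1, size=d)

# Classical predictor: rho_hat = Gamma_1 o Gamma_0^{-1}, where Gamma_0 is the
# covariance operator and Gamma_1 the lag-1 cross-covariance operator; Gamma_0
# is inverted only on the span of its k leading eigenfunctions.
G0 = X[:-1].T @ X[:-1] / (n - 1)
G1 = X[1:].T @ X[:-1] / (n - 1)
lam, V = np.linalg.eigh(G0)
lam, V = lam[::-1][:k], V[:, ::-1][:, :k]                # k leading eigenpairs
rho_hat = G1 @ V @ np.diag(1.0 / lam) @ V.T

x_next_classical = rho_hat @ X[-1]                       # one-step-ahead curve

# LSTM alternative (sketch): treat each discretized curve as a d-dimensional
# vector and train a sequence-to-one forecaster on sliding windows.
class CurveLSTM(nn.Module):
    def __init__(self, d, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, d)

    def forward(self, x):              # x: (batch, seq_len, d)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next curve

model = CurveLSTM(d)
window = torch.tensor(X[-10:], dtype=torch.float32).unsqueeze(0)  # (1, 10, d)
x_next_lstm = model(window)            # untrained forward pass, shapes only
```

The spectral truncation of Gamma_0 is the standard device for this ill-posed inversion; the LSTM above would need a training loop over sliding windows before its forecast is meaningful.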
Related papers
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Reducing Computational Costs in Sentiment Analysis: Tensorized Recurrent Networks vs. Recurrent Networks [0.12891210250935145]
Anticipating audience reaction to a given text is integral to several facets of society, spanning politics, research, and commercial industries.
Sentiment analysis (SA) is a useful natural language processing (NLP) technique that utilizes lexical/statistical and deep learning methods to determine whether different-sized texts exhibit positive, negative, or neutral emotions.
arXiv Detail & Related papers (2023-06-16T09:18:08Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- ARMA Cell: A Modular and Effective Approach for Neural Autoregressive Modeling [0.0]
We introduce the ARMA cell, a simpler, modular, and effective approach for time series modeling in neural networks.
Our experiments show that the proposed methodology is competitive with popular alternatives in terms of performance.
arXiv Detail & Related papers (2022-08-31T15:23:10Z)
- On the balance between the training time and interpretability of neural ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODEs cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODEs is comparable to, or exceeds, that of conventional time-series modelling tools.
We propose a new view on time-series modelling using combined neural networks and an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z)
- DeepBayes -- an estimator for parameter estimation in stochastic nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks to learn an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Stochastic Recurrent Neural Network for Multistep Time Series Forecasting [0.0]
We leverage advances in deep generative models and the concept of state space models to propose an adaptation of the recurrent neural network for time series forecasting.
Our model preserves the architectural workings of a recurrent neural network for which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling.
arXiv Detail & Related papers (2021-04-26T01:43:43Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Amortized Bayesian Inference for Models of Cognition [0.1529342790344802]
Recent advances in simulation-based inference using specialized neural network architectures circumvent many previous problems of approximate Bayesian computation.
We provide a general introduction to amortized Bayesian parameter estimation and model comparison.
arXiv Detail & Related papers (2020-05-08T08:12:15Z)
- Forecasting Sequential Data using Consistent Koopman Autoencoders [52.209416711500005]
A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems.
We propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics.
Key to our approach is a new analysis which explores the interplay between consistent dynamics and their associated Koopman operators.
arXiv Detail & Related papers (2020-03-04T18:24:30Z)
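As a concrete illustration of the last entry's idea, below is a minimal sketch of an autoencoder with one linear latent operator for the forward dynamics and a second for the backward dynamics, tied together by a consistency penalty. The MLP sizes, latent dimension, and the exact form of the penalty are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    # Generic sketch: encode the state, advance it with a linear (Koopman-style)
    # operator forward in time and with a second operator backward, then decode.
    def __init__(self, d, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.Tanh(), nn.Linear(64, d))
        self.C_fwd = nn.Linear(latent, latent, bias=False)   # forward dynamics
        self.C_bwd = nn.Linear(latent, latent, bias=False)   # backward dynamics

    def forward(self, x):
        z = self.enc(x)
        return self.dec(self.C_fwd(z)), self.dec(self.C_bwd(z))

def consistency_loss(model):
    # Penalize deviation of C_bwd @ C_fwd from the identity, so the backward
    # operator acts as an approximate inverse of the forward one.
    CF, CB = model.C_fwd.weight, model.C_bwd.weight
    I = torch.eye(CF.shape[0])
    return ((CB @ CF - I) ** 2).sum()
```

During training one would match dec(C_fwd(enc(x_t))) to x_{t+1} and dec(C_bwd(enc(x_t))) to x_{t-1}, adding consistency_loss as a regularizer.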
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.