Industrial Forecasting with Exponentially Smoothed Recurrent Neural Networks
- URL: http://arxiv.org/abs/2004.04717v2
- Date: Fri, 30 Oct 2020 16:54:40 GMT
- Title: Industrial Forecasting with Exponentially Smoothed Recurrent Neural Networks
- Authors: Matthew F Dixon
- Abstract summary: We present a class of exponentially smoothed recurrent neural networks (RNNs) which are well suited to modeling non-stationary dynamical systems arising in industrial applications.
Application of exponentially smoothed RNNs to forecasting electricity load, weather data, and stock prices highlights the efficacy of exponential smoothing of the hidden state for multi-step time series forecasting.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series modeling has entered an era of unprecedented growth in
the size and complexity of data, which requires new modeling approaches. While
many new general-purpose machine learning approaches have emerged, they remain
poorly understood and irreconcilable with more traditional statistical
modeling approaches. We present a general class of exponentially smoothed
recurrent neural networks (RNNs) which are well suited to modeling
non-stationary dynamical systems arising in industrial applications. In
particular, we analyze their capacity to characterize the non-linear partial
autocorrelation structure of time series and directly capture dynamic effects
such as seasonality and trends. Application of exponentially smoothed RNNs to
forecasting electricity load, weather data, and stock prices highlights the
efficacy of exponential smoothing of the hidden state for multi-step time
series forecasting. The results also suggest that popular, but more
complicated, neural network architectures originally designed for speech
processing, such as LSTMs and GRUs, are likely over-engineered for industrial
forecasting: light-weight exponentially smoothed architectures, trained in a
fraction of the time, capture the salient features while being superior to,
and more robust than, simple RNNs and ARIMA models. Additionally, uncertainty
quantification of the exponentially smoothed RNNs, provided by Bayesian
estimation, is shown to provide improved coverage.
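To make the mechanism concrete, here is a minimal sketch of an RNN cell with exponential smoothing applied to the hidden state. The fixed smoothing weight `alpha`, the plain tanh cell, and all names are illustrative assumptions; the paper's exact parameterization (for instance, a learned smoothing coefficient) may differ.

```python
import numpy as np

class ExpSmoothedRNNCell:
    """Plain RNN cell whose hidden state is exponentially smoothed.

    A minimal sketch: `alpha` is a fixed smoothing weight here, though
    it could equally be a learned parameter.
    """

    def __init__(self, n_in, n_hidden, alpha=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input weights
        self.Wh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)
        self.alpha = alpha  # weight placed on the new hidden state

    def forward(self, xs):
        h_tilde = np.zeros_like(self.b)  # smoothed hidden state
        outputs = []
        for x in xs:
            # Standard RNN update, driven by the smoothed state ...
            h = np.tanh(self.Wx @ x + self.Wh @ h_tilde + self.b)
            # ... then exponential smoothing of the hidden state itself,
            # which filters high-frequency noise and retains the trend.
            h_tilde = self.alpha * h + (1.0 - self.alpha) * h_tilde
            outputs.append(h_tilde.copy())
        return np.stack(outputs)

# Toy usage: a sequence of 20 two-dimensional observations.
cell = ExpSmoothedRNNCell(n_in=2, n_hidden=8, alpha=0.3)
hidden = cell.forward(np.random.default_rng(1).normal(size=(20, 2)))
print(hidden.shape)  # (20, 8)
```

With `alpha = 1` the cell reduces to a plain RNN, so the smoothing weight interpolates between a slowly varying, noise-filtered hidden state and one that reacts fully to each new observation.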
Related papers
- Deconstructing Recurrence, Attention, and Gating: Investigating the transferability of Transformers and Gated Recurrent Neural Networks in forecasting of dynamical systems [0.0]
We decompose the key architectural components of the most powerful neural architectures, namely gating and recurrence in RNNs, and attention mechanisms in transformers.
A key finding is that neural gating and attention improve the accuracy of all standard RNNs in most tasks, while adding a notion of recurrence to transformers is detrimental.
arXiv Detail & Related papers (2024-10-03T16:41:51Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Brain-Inspired Spiking Neural Network for Online Unsupervised Time Series Prediction [13.521272923545409]
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding (a minimal embedding sketch follows this entry).
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
arXiv Detail & Related papers (2023-04-10T16:18:37Z)
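Because the CLURSNN entry above hinges on delay embedding, here is a minimal sketch of delay-coordinate embedding with randomly drawn delays. The function name, the delay-sampling scheme, and the defaults are illustrative assumptions; the paper's Random Delay Embedding may differ in detail.

```python
import numpy as np

def random_delay_embedding(x, dim=3, max_delay=10, seed=0):
    """Embed a scalar series into `dim`-dimensional delay coordinates,
    using randomly drawn (rather than uniformly spaced) delays.

    A sketch of the general idea only; the paper's construction may differ.
    """
    rng = np.random.default_rng(seed)
    # Randomly chosen, sorted delays; delay 0 keeps the current observation.
    delays = np.sort(rng.choice(np.arange(1, max_delay + 1),
                                size=dim - 1, replace=False))
    delays = np.concatenate(([0], delays))
    start = delays.max()
    # Row t holds (x[t], x[t - d1], ..., x[t - d_{dim-1}]).
    return np.stack([x[start - d : len(x) - d] for d in delays], axis=1)

# Toy usage on a noisy sine wave.
t = np.linspace(0, 20, 500)
series = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
embedded = random_delay_embedding(series, dim=3, max_delay=10)
print(embedded.shape)  # (500 - largest drawn delay, 3)
```

Each row pairs the current observation with past values at the drawn delays, which, by Takens-style embedding arguments, can reconstruct the state of the underlying dynamical system.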
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- ONE-NAS: An Online NeuroEvolution based Neural Architecture Search for Time Series Forecasting [3.3758186776249928]
This work presents the Online NeuroEvolution based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is the first neural architecture search algorithm capable of automatically designing and training new recurrent neural networks (RNNs) in an online setting.
It is shown to outperform traditional statistical time series forecasting, including naive, moving average, and exponential smoothing methods; these classical baselines are sketched after this entry.
arXiv Detail & Related papers (2022-02-27T22:58:32Z)
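For context, the classical baselines named in the ONE-NAS entry above amount to a few lines each. This sketch shows one-step-ahead naive, moving-average, and simple exponential smoothing forecasts; the function names, window, and smoothing constant are illustrative choices.

```python
import numpy as np

def naive_forecast(x):
    """Next value is predicted to equal the last observation."""
    return x[-1]

def moving_average_forecast(x, window=5):
    """Next value is the mean of the last `window` observations."""
    return np.mean(x[-window:])

def exp_smoothing_forecast(x, alpha=0.3):
    """Simple exponential smoothing: the forecast is the smoothed level."""
    level = x[0]
    for obs in x[1:]:
        level = alpha * obs + (1.0 - alpha) * level
    return level

# Toy usage on a short series.
x = np.array([10.0, 11.0, 10.5, 12.0, 11.5, 12.5])
print(naive_forecast(x), moving_average_forecast(x), exp_smoothing_forecast(x))
```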
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Stochastic Recurrent Neural Network for Multistep Time Series Forecasting [0.0]
We leverage advances in deep generative models and the concept of state space models to propose an adaptation of the recurrent neural network for time series forecasting.
Our model preserves the architectural workings of a recurrent neural network for which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling.
arXiv Detail & Related papers (2021-04-26T01:43:43Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations (a discretized unit is sketched after this entry).
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
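To illustrate networks built from linear first-order dynamical systems, here is a rough sketch of a single liquid time-constant unit advanced with a fused Euler step. The gating nonlinearity `f`, the constants, and the update rule are assumptions modeled loosely on the liquid time-constant formulation; consult the paper for the exact equations.

```python
import numpy as np

def ltc_step(x, inp, W, U, b, tau, A, dt=0.1):
    """One fused-Euler step of a liquid time-constant unit (a sketch).

    Assumed state dynamics: dx/dt = -(1/tau + f) * x + f * A,
    where f = sigmoid(W @ inp + U @ x + b) gates an input-dependent
    time constant.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ inp + U @ x + b)))  # gating nonlinearity
    # Fused (implicit-style) Euler update of the linear first-order system.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: 4 hidden units driven by a 2-d input signal.
rng = np.random.default_rng(0)
W, U = rng.normal(0, 0.5, (4, 2)), rng.normal(0, 0.5, (4, 4))
b, A, tau = np.zeros(4), np.ones(4), 1.0
x = np.zeros(4)
for t in np.linspace(0, 5, 50):
    x = ltc_step(x, np.array([np.sin(t), np.cos(t)]), W, U, b, tau, A)
print(x)  # bounded hidden state after 50 steps
```

Because the effective time constant `1 / (1/tau + f)` depends on the input, the unit adapts its memory horizon online, while the implicit-style update keeps the state bounded.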
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.