Evaluation of deep learning models for multi-step ahead time series
prediction
- URL: http://arxiv.org/abs/2103.14250v1
- Date: Fri, 26 Mar 2021 04:07:11 GMT
- Title: Evaluation of deep learning models for multi-step ahead time series
prediction
- Authors: Rohitash Chandra, Shaurya Goyal, Rishabh Gupta
- Abstract summary: We present an evaluation study that compares the performance of deep learning models for multi-step ahead time series prediction.
Our deep learning methods comprise simple recurrent neural networks, long short-term memory (LSTM) networks, bidirectional LSTM, encoder-decoder LSTM networks, and convolutional neural networks.
- Score: 1.3764085113103222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series prediction with neural networks has been the focus of
much research in the past few decades. Given the recent deep learning
revolution, there has been much attention on using deep learning models for
time series prediction, and hence it is important to evaluate their strengths
and weaknesses. In this paper, we present an evaluation study that compares
the performance of deep learning models for multi-step ahead time series
prediction. Our deep learning methods comprise simple recurrent neural
networks, long short-term memory (LSTM) networks, bidirectional LSTM,
encoder-decoder LSTM networks, and convolutional neural networks. We also
provide a comparison with simple neural networks that use stochastic gradient
descent and the adaptive gradient method (Adam) for training. We focus on
univariate, multi-step-ahead prediction on benchmark time series datasets and
compare with results from the literature. The results show that bidirectional
and encoder-decoder LSTM networks provide the best accuracy for the given
time series problems with different properties.
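For concreteness, here is a minimal sketch of one of the compared architectures, an encoder-decoder LSTM for univariate multi-step ahead prediction, written in Keras. The window size, forecast horizon, layer widths, and toy sine-wave data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal encoder-decoder LSTM for univariate multi-step ahead prediction.
# Assumed shapes: a 10-step input window predicting 5 steps ahead.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

n_in, n_out = 10, 5  # input window length and forecast horizon (assumed)

model = Sequential([
    LSTM(50, input_shape=(n_in, 1)),    # encoder: compress the input window
    RepeatVector(n_out),                # repeat the context for each output step
    LSTM(50, return_sequences=True),    # decoder: unroll over the horizon
    TimeDistributed(Dense(1)),          # one prediction per future step
])
model.compile(optimizer="adam", loss="mse")  # Adam, one of the compared trainers

# Toy data: sliding windows over a sine wave (illustrative only).
series = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = np.array([series[i:i + n_in]
              for i in range(len(series) - n_in - n_out)])
y = np.array([series[i + n_in:i + n_in + n_out]
              for i in range(len(series) - n_in - n_out)])
model.fit(X[..., None], y[..., None], epochs=2, batch_size=32, verbose=0)
```

The RepeatVector/TimeDistributed pattern is one standard way to emit the whole horizon at once; a plain Dense head with n_out outputs is a simpler alternative.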
Related papers
- FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series [0.3277163122167434]
This paper proposes a novel modular neural network model for time series prediction that is interpretable by construction.
A recurrent neural network learns the temporal dependencies in the data while an attention-based feature selection component selects the most relevant features.
A modular deep network is trained on the selected features independently, showing users how each feature influences outcomes and making the model interpretable.
arXiv Detail & Related papers (2023-11-28T14:51:06Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- A Comparative Study of Detecting Anomalies in Time Series Data Using LSTM and TCN Models [2.007262412327553]
This paper compares two prominent deep learning modeling techniques.
The Recurrent Neural Network (RNN)-based Long Short-Term Memory (LSTM) and the Convolutional Neural Network (CNN)-based Temporal Convolutional Network (TCN) are compared.
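To make the comparison concrete, here is a minimal Keras sketch of the two model families; layer sizes and dilation rates are assumptions for illustration, and the TCN is approximated by stacked causal dilated 1-D convolutions rather than the full residual-block design.

```python
# Sketch: LSTM vs. TCN-style models for sequence prediction (illustrative sizes).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (LSTM, Dense, Conv1D,
                                     GlobalAveragePooling1D)

def build_lstm(n_steps, n_features=1):
    # Recurrent model: hidden state is carried step by step through time.
    return Sequential([
        LSTM(64, input_shape=(n_steps, n_features)),
        Dense(1),
    ])

def build_tcn(n_steps, n_features=1):
    # TCN-style model: causal convolutions with growing dilation widen the
    # receptive field exponentially instead of recurring over time.
    return Sequential([
        Conv1D(64, 3, padding="causal", dilation_rate=1, activation="relu",
               input_shape=(n_steps, n_features)),
        Conv1D(64, 3, padding="causal", dilation_rate=2, activation="relu"),
        Conv1D(64, 3, padding="causal", dilation_rate=4, activation="relu"),
        GlobalAveragePooling1D(),
        Dense(1),
    ])
```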
arXiv Detail & Related papers (2021-12-17T02:46:55Z)
- An Experimental Review on Deep Learning Architectures for Time Series Forecasting [0.0]
We provide the most extensive deep learning study for time series forecasting.
Among all studied models, the results show that long short-term memory (LSTM) and convolutional networks (CNN) are the best alternatives.
CNNs achieve comparable performance with less variability of results under different parameter configurations, while also being more efficient.
arXiv Detail & Related papers (2021-03-22T17:58:36Z)
- Meta-Learning for Koopman Spectral Analysis with Short Time-series [49.41640137945938]
Existing methods require long time-series for training neural networks.
We propose a meta-learning method for estimating embedding functions from unseen short time-series.
We experimentally demonstrate that the proposed method achieves better performance in terms of eigenvalue estimation and future prediction.
arXiv Detail & Related papers (2021-02-09T07:19:19Z)
- Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We reveal that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering.
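A common instantiation of trend filtering is the l1 trend filter, which extracts a piecewise-linear trend; treating this as the paper's exact filter is an assumption. A minimal sketch using cvxpy:

```python
# Sketch: l1 trend filtering as temporal preprocessing (assumed variant).
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam=10.0):
    # Fit a piecewise-linear trend x to y by penalizing second differences.
    x = cp.Variable(len(y))
    objective = cp.Minimize(0.5 * cp.sum_squares(y - x)
                            + lam * cp.norm(cp.diff(x, 2), 1))
    cp.Problem(objective).solve()
    return x.value

noisy = np.cumsum(np.random.randn(200))  # toy random-walk series
trend = l1_trend_filter(noisy)           # smoothed series fed to the network
```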
arXiv Detail & Related papers (2020-10-16T08:29:36Z)
- Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine Learning [0.0]
We train a neural network to make a forecast of the disturbance storm time index at origin time $t$ with a forecasting horizon of 1 up to 6 hours.
Inspection of the model's results with the correlation coefficient and RMSE indicated a performance comparable to the latest publications.
A new method is proposed to measure whether two time series are shifted in time with respect to each other.
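The classic dynamic time warping recursion makes the idea concrete; the generic textbook implementation below is not necessarily the paper's exact variant.

```python
# Sketch: dynamic time warping (DTW) distance between two series. A small
# distance despite a temporal shift indicates the series are shifted copies.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match / insertion / deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

t = np.linspace(0, 4 * np.pi, 200)
print(dtw_distance(np.sin(t), np.sin(t - 0.5)))  # small despite the shift
```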
arXiv Detail & Related papers (2020-06-08T15:14:13Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
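For context, the hidden state of a liquid time-constant network evolves under a gated first-order ODE of roughly the following form (paraphrased; $\tau$ is a time constant, $I(t)$ the input, $f$ a learned network with parameters $\theta$, and $A$ a bias vector):

```latex
\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), t, \theta\big)\right] x(t)
                 + f\big(x(t), I(t), t, \theta\big)\, A
```

The input-dependent term added to $1/\tau$ is what makes the effective time constant "liquid".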
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network as the predictive model.
Our algorithm requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of the method and corroborate our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)