Prediction of financial time series using LSTM and data denoising
methods
- URL: http://arxiv.org/abs/2103.03505v1
- Date: Fri, 5 Mar 2021 07:32:36 GMT
- Title: Prediction of financial time series using LSTM and data denoising
methods
- Authors: Qi Tang and Tongmei Fan and Ruchen Shi and Jingyan Huang and Yidan Ma
- Abstract summary: This paper proposes an ensemble method based on data denoising methods, including the wavelet transform (WT) and singular spectrum analysis (SSA).
As WT and SSA can extract useful information from the original sequence and avoid overfitting, the hybrid model can better grasp the sequence pattern of the closing price of the DJIA.
- Score: 0.29923891863939933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To further overcome the difficulties of existing models in
dealing with the non-stationary and nonlinear characteristics of high-frequency
financial time series data, especially their weak generalization ability, this
paper proposes an ensemble method that combines data denoising methods, namely
the wavelet transform (WT) and singular spectrum analysis (SSA), with a long
short-term memory (LSTM) neural network to build a prediction model. The
financial time series is decomposed and reconstructed by WT and SSA to remove
noise, yielding a smooth sequence that retains the effective information. This
smoothed sequence is fed into the LSTM to obtain the predicted values. With the
Dow Jones Industrial Average (DJIA) as the research object, the five-minute
closing price of the DJIA is predicted over short-term (1 hour), medium-term
(3 hours) and long-term (6 hours) horizons. Based on root mean square error
(RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and
the standard deviation of the absolute percentage error (SDAPE), the
experimental results show that at all three horizons data denoising greatly
improves the accuracy and stability of the prediction and effectively improves
the generalization ability of the LSTM prediction model. As WT and SSA extract
useful information from the original sequence and avoid overfitting, the hybrid
models better capture the sequence pattern of the DJIA closing price. Moreover,
the WT-LSTM model outperforms both the benchmark LSTM model and the SSA-LSTM
model.
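The four evaluation metrics named in the abstract can be sketched in plain Python as follows. This is a minimal illustration of the standard formulas, not code from the paper; the sample price values and variable names are hypothetical.

```python
import math

def forecast_metrics(y_true, y_pred):
    """Compute RMSE, MAE, MAPE (%) and SDAPE (%) for a forecast.

    SDAPE is taken here as the standard deviation of the absolute
    percentage errors, matching the abstract's description.
    """
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    # Absolute percentage errors, in percent.
    apes = [abs(e) / abs(t) * 100 for e, t in zip(errors, y_true)]
    mape = sum(apes) / n
    sdape = math.sqrt(sum((a - mape) ** 2 for a in apes) / n)
    return rmse, mae, mape, sdape

# Hypothetical five-minute closing prices and predictions (not from the paper):
actual = [100.0, 102.0, 101.0, 103.0]
predicted = [101.0, 101.5, 101.5, 102.0]
rmse, mae, mape, sdape = forecast_metrics(actual, predicted)
```

A lower MAPE indicates higher average accuracy, while a lower SDAPE indicates more stable prediction errors, which is how the paper separates accuracy from stability.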
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis [50.31589712761807]
Large language models (LLMs) are notoriously token-hungry during pre-training, and high-quality text data on the web is approaching its scaling limit for LLMs.
We investigate the consequences of repeating pre-training data, revealing that the model is susceptible to overfitting.
Second, we examine the key factors contributing to multi-epoch degradation, finding that significant factors include dataset size, model parameters, and training objectives.
arXiv Detail & Related papers (2023-05-22T17:02:15Z) - Extreme-Long-short Term Memory for Time-series Prediction [0.0]
Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN).
In this paper, we propose an advanced LSTM algorithm, the Extreme Long Short-Term Memory (E-LSTM)
The new E-LSTM requires only 2 epochs to match the results a traditional LSTM reaches after 7 epochs.
arXiv Detail & Related papers (2022-10-15T09:45:48Z) - DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [53.37679435230207]
We propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility.
Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data.
arXiv Detail & Related papers (2022-09-23T16:13:47Z) - Parameter estimation for WMTI-Watson model of white matter using
encoder-decoder recurrent neural network [0.0]
In this study, we evaluate the performance of NLLS, the RNN-based method and a multilayer perceptron (MLP) on rat and human brain datasets.
We show that the proposed RNN-based fitting approach greatly reduces computation time compared with NLLS.
arXiv Detail & Related papers (2022-03-01T16:33:15Z) - Long Short-Term Memory Neural Network for Financial Time Series [0.0]
We present an ensemble of independent and parallel long short-term memory neural networks for the prediction of stock price movement.
With a straightforward trading strategy, comparisons with a randomly chosen portfolio and a portfolio containing all the stocks in the index show that the portfolio resulting from the LSTM ensemble provides better average daily returns and higher cumulative returns over time.
arXiv Detail & Related papers (2022-01-20T15:17:26Z) - Macroeconomic forecasting with LSTM and mixed frequency time series data [0.0]
We first present how the conventional LSTM model can be adapted to time series observed at mixed frequencies.
We then adopt the unrestricted Mixed DAta Sampling (U-MIDAS) scheme into the LSTM architecture.
Our proposed model could be very helpful for short-term forecasts in periods of large economic downturns.
arXiv Detail & Related papers (2021-09-28T14:56:37Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models that infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Learning representations with end-to-end models for improved remaining
useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory layers (LSTM) to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z) - Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find that the LSTM model with the customized loss function improves the performance of the trading bot over a regression baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z) - Ensemble long short-term memory (EnLSTM) network [0.456877715768796]
We propose an ensemble long short-term memory (EnLSTM) network, which can be trained on a small dataset and process sequential data.
The EnLSTM is proven to be the state-of-the-art model in generating well logs with a mean-square-error (MSE) reduction of 34%.
arXiv Detail & Related papers (2020-04-26T05:42:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.