Real-time Forecasting of Time Series in Financial Markets Using
Sequentially Trained Many-to-one LSTMs
- URL: http://arxiv.org/abs/2205.04678v1
- Date: Tue, 10 May 2022 05:18:45 GMT
- Authors: Kelum Gajamannage and Yonggi Park
- Abstract summary: We train two LSTMs with a known length, say $T$ time steps, of previous data and predict only one time step ahead.
While one LSTM is employed to find the best number of epochs, the second LSTM is trained only for that number of epochs to make predictions.
We treat the current prediction as part of the training set for the next prediction and retrain the same LSTM.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Financial markets are highly complex and volatile; thus, learning about such
markets for the sake of making predictions is vital to make early alerts about
crashes and subsequent recoveries. People have been using learning tools from
diverse fields such as financial mathematics and machine learning in an
attempt to make trustworthy predictions on such markets. However, the
accuracy of such techniques was not adequate until artificial neural
network (ANN) frameworks were developed. Moreover, making accurate real-time
predictions of financial time series depends strongly on the ANN
architecture in use and the procedure used to train it. Long short-term memory
(LSTM) is a member of the recurrent neural network family which has been widely
utilized for time series prediction. Specifically, we train two LSTMs with a
known length, say $T$ time steps, of previous data and predict only one time
step ahead. At each iteration, while one LSTM is employed to find the best
number of epochs, the second LSTM is trained only for the best number of epochs
to make predictions. We treat the current prediction as part of the training
set for the next prediction and retrain the same LSTM. While classic training schemes
incur greater error the further the predictions lie in the test
period, our approach maintains superior accuracy as training
proceeds through the testing period. The forecasting accuracy
of our approach is validated using three time series from each of the three
diverse financial markets: stock, cryptocurrency, and commodity. The results
are compared with those of an extended Kalman filter, an autoregressive model,
and an autoregressive integrated moving average model.
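The rolling scheme the abstract describes — slide a window of the last $T$ values, predict one step ahead, then fold that prediction back into the training history before predicting the next step — can be sketched as below. This is an illustrative sketch, not the paper's implementation: the LSTM (and the second LSTM used to select the epoch count) is replaced by a trivial stand-in predictor so the loop itself stays runnable, and the names `rolling_forecast` and `naive_model` are invented here.

```python
import numpy as np

def rolling_forecast(series, T, horizon, fit_predict):
    """Forecast `horizon` steps, one step at a time.

    At each iteration the last T observed-or-predicted values form the
    training window; the new prediction is appended to the history so the
    next iteration trains on it too (the sequential scheme in the abstract).
    """
    history = list(series)
    preds = []
    for _ in range(horizon):
        window = np.array(history[-T:])
        y_hat = fit_predict(window)   # stand-in for retraining the LSTM on the window
        preds.append(y_hat)
        history.append(y_hat)         # treat the prediction as training data
    return preds

def naive_model(window):
    """Trivial stand-in model: last value plus the window's average trend."""
    return float(window[-1] + (window[-1] - window[0]) / (len(window) - 1))
```

On a perfectly linear series the stand-in extrapolates the trend, e.g. `rolling_forecast(list(range(10)), T=5, horizon=3, fit_predict=naive_model)` continues 10, 11, 12; with a real LSTM, `fit_predict` would refit the network on each window for the epoch count found by the companion LSTM.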
Related papers
- Indian Stock Market Prediction using Augmented Financial Intelligence ML [0.0]
This paper presents price prediction models using Machine Learning algorithms augmented with Superforecasters predictions.
The models are evaluated using the Mean Absolute Error to determine their predictive accuracy.
The main goal is to identify Superforecasters and track their predictions to anticipate unpredictable shifts or changes in stock prices.
arXiv Detail & Related papers (2024-07-02T12:58:50Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Making Pre-trained Language Models both Task-solvers and Self-calibrators [52.98858650625623]
Pre-trained language models (PLMs) serve as backbones for various real-world systems.
Previous work shows that introducing an extra calibration task can mitigate this issue.
We propose a training algorithm LM-TOAST to tackle the challenges.
arXiv Detail & Related papers (2023-07-21T02:51:41Z)
- Short-Term Stock Price Forecasting using exogenous variables and Machine Learning Algorithms [3.2732602885346576]
This research paper compares four machine learning models and their accuracy in forecasting three well-known stocks traded in the NYSE from March 2020 to May 2022.
We deploy, develop, and tune XGBoost, Random Forest, Multi-layer Perceptron, and Support Vector Regression models.
Using a training data set of 240 trading days, we find that XGBoost gives the highest accuracy despite running longer.
arXiv Detail & Related papers (2023-05-17T07:04:32Z)
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, this technique can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
- Univariate and Multivariate LSTM Model for Short-Term Stock Market Prediction [1.6114012813668934]
This paper presents an LSTM model with two different input approaches for predicting the short-term stock prices of two Indian companies.
Ten years of historical data (2012-2021) are taken from the Yahoo Finance website to analyze the proposed approaches.
arXiv Detail & Related papers (2022-05-08T07:01:12Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
By addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Long Short-Term Memory Neural Network for Financial Time Series [0.0]
We present an ensemble of independent and parallel long short-term memory neural networks for the prediction of stock price movement.
With a straightforward trading strategy, comparisons with a randomly chosen portfolio and a portfolio containing all the stocks in the index show that the portfolio resulting from the LSTM ensemble provides better average daily returns and higher cumulative returns over time.
arXiv Detail & Related papers (2022-01-20T15:17:26Z)
- Bilinear Input Normalization for Neural Networks in Financial Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z)
- A Deep Learning Framework for Predicting Digital Asset Price Movement from Trade-by-trade Data [20.392440676633573]
This paper presents a framework that predicts price movement of cryptocurrencies from trade-by-trade data.
The model is trained to achieve high performance on nearly a year of trade-by-trade data.
In a realistic trading simulation setting, the prediction made by the model could be easily monetized.
arXiv Detail & Related papers (2020-10-11T10:42:02Z)
- Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find the LSTM model with the customized loss function to have improved performance in the trading bot over a regressive baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.