Generative Adversarial Network (GAN) and Enhanced Root Mean Square Error
(ERMSE): Deep Learning for Stock Price Movement Prediction
- URL: http://arxiv.org/abs/2112.03946v1
- Date: Tue, 30 Nov 2021 18:38:59 GMT
- Title: Generative Adversarial Network (GAN) and Enhanced Root Mean Square Error
(ERMSE): Deep Learning for Stock Price Movement Prediction
- Authors: Ashish Kumar, Abeer Alsadoon, P. W. C. Prasad, Salma Abdullah, Tarik
A. Rashid, Duong Thu Hang Pham, Tran Quoc Vinh Nguyen
- Abstract summary: This paper aims to improve prediction accuracy and minimize forecasting error loss by using Generative Adversarial Networks.
It was found that the Generative Adversarial Network (GAN) outperformed the standalone LSTM on the enhanced root mean square error.
- Score: 15.165487282631535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prediction of stock price movement direction is significant in financial
circles and academia. Stock prices contain complex, incomplete, and fuzzy
information, which makes predicting their development trend an extremely
difficult task. Predicting and analysing financial data is a nonlinear,
time-dependent problem. With the rapid development of machine learning and deep
learning, this task can be performed more effectively by a purposely designed
network. This paper aims to improve prediction accuracy and minimize
forecasting error loss through a deep learning architecture based on Generative
Adversarial Networks. A generic model is proposed that combines the Phase-space
Reconstruction (PSR) method, for reconstructing the price series, with a
Generative Adversarial Network (GAN) built from two neural networks: a Long
Short-Term Memory (LSTM) network as the generative model and a Convolutional
Neural Network (CNN) as the discriminative model, trained adversarially to
forecast the stock market. The LSTM generates new instances from historical
basic-indicator information, and the CNN then estimates whether the data was
predicted by the LSTM or is real. It was found that the GAN outperformed the
standalone LSTM on the enhanced root mean square error: it was 4.35% more
accurate in predicting the direction and reduced processing time and RMSE by 78
seconds and 0.029, respectively. The proposed system thus concentrates on
minimizing root mean square error and processing time while improving direction
prediction accuracy, and yields a better result in the accuracy of the stock
index.
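The abstract names the components but not their configuration. Below is a minimal, hypothetical PyTorch sketch of such a pipeline: a PSR-style delay embedding, an LSTM generator, a 1-D CNN discriminator, and one adversarial training step. All layer sizes, the PSR settings, and the combined generator loss are illustrative assumptions; the paper's exact ERMSE formulation is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def phase_space_reconstruction(series, dim=4, delay=1):
    """Rebuild a 1-D price series into overlapping delay vectors
    (a common form of PSR; the paper's exact settings are assumed)."""
    n = series.numel() - (dim - 1) * delay
    return torch.stack([series[i : i + n] for i in range(0, dim * delay, delay)], dim=1)

class Generator(nn.Module):
    """LSTM generator: maps a window of basic indicators to the next price."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predicted next price: (batch, 1)

class Discriminator(nn.Module):
    """1-D CNN discriminator: scores a price sequence as real or generated."""
    def __init__(self, seq_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(32 * seq_len, 1),  # raw real/fake logit
        )

    def forward(self, seq):            # seq: (batch, 1, seq_len)
        return self.net(seq)

def train_step(G, D, opt_g, opt_d, x, history, real_next):
    """One adversarial step: D learns to separate real continuations from
    LSTM-generated ones; G learns to fool D while staying close to truth."""
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)

    fake_next = G(x)                                           # (batch, 1)
    real_seq = torch.cat([history, real_next], 1).unsqueeze(1)
    fake_seq = torch.cat([history, fake_next], 1).unsqueeze(1)

    # Discriminator update.
    opt_d.zero_grad()
    d_loss = bce(D(real_seq), ones) + bce(D(fake_seq.detach()), zeros)
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial term plus a direct forecasting term
    # (the forecasting term is an assumption, standing in for ERMSE).
    opt_g.zero_grad()
    g_loss = bce(D(fake_seq), ones) + F.mse_loss(fake_next, real_next)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

def direction_accuracy(pred, actual, prev):
    """Fraction of steps where the predicted and actual moves share a sign,
    matching the direction-accuracy metric reported in the abstract."""
    return ((pred - prev).sign() == (actual - prev).sign()).float().mean().item()
```

In a setup like this, the generator is trained with both the adversarial term and a direct forecasting term, which is one plausible reading of how adversarial training could reduce RMSE relative to a standalone LSTM.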
Related papers
- GARCH-Informed Neural Networks for Volatility Prediction in Financial Markets [0.0]
We present a new, hybrid Deep Learning model that captures and forecasts market volatility more accurately than either class of models is capable of on its own.
When compared to other time series models, GINN showed superior out-of-sample prediction performance in terms of the Coefficient of Determination ($R^2$), Mean Squared Error (MSE), and Mean Absolute Error (MAE).
arXiv Detail & Related papers (2024-09-30T23:53:54Z)
- A Study on Stock Forecasting Using Deep Learning and Statistical Models [3.437407981636465]
This paper reviews many deep learning algorithms for stock price forecasting, using a record of S&P 500 index data for training and testing.
It discusses various models, including the autoregressive integrated moving average (ARIMA) model, the recurrent neural network (RNN) model, the long short-term memory (LSTM) model, the convolutional neural network (CNN) model, and the fully convolutional neural network (FCN) model.
arXiv Detail & Related papers (2024-02-08T16:45:01Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
In contrast, we observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- DNNAbacus: Toward Accurate Computational Cost Prediction for Deep Neural Networks [0.9896984829010892]
This paper investigates the computational resource demands of 29 classical deep neural networks and builds accurate models for predicting computational costs.
We propose a lightweight prediction approach DNNAbacus with a novel network structural matrix for network representation.
Our experimental results show that the mean relative error (MRE) is 0.9% with respect to time and 2.8% with respect to memory for the 29 classic models, which is much lower than that of state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-24T14:21:27Z)
- Attention-based CNN-LSTM and XGBoost hybrid model for stock prediction [7.231134145443057]
This paper proposes an attention-based CNN-LSTM and XGBoost hybrid model to predict the stock price.
The model can fully mine the historical information of the stock market in multiple periods.
The results show that the hybrid model is more effective and the prediction accuracy is relatively high.
arXiv Detail & Related papers (2022-04-06T07:06:30Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a $\textit{slow start, fast decay}$ learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- A Novel Ensemble Deep Learning Model for Stock Prediction Based on Stock Prices and News [7.578363431637128]
This paper proposes to use sentiment analysis to extract useful information from multiple textual data sources to predict future stock movement.
The blending ensemble model contains two levels. The first level contains two Recurrent Neural Networks (RNNs): one Long Short-Term Memory network (LSTM) and one Gated Recurrent Units network (GRU).
At the second level, a fully connected neural network ensembles the individual prediction results to further improve prediction accuracy (a minimal sketch of this two-level structure follows after this list).
arXiv Detail & Related papers (2020-07-23T15:25:37Z)
- Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find that the LSTM model with the customized loss function improves the trading bot's performance over a regressive baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
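For the blending-ensemble entry above, here is a minimal, hypothetical PyTorch sketch of the two-level structure it describes: LSTM and GRU forecasters at level one, and a fully connected blender at level two. Layer sizes, feature counts, and the toy usage are assumptions, not the paper's published configuration.

```python
import torch
import torch.nn as nn

class RNNForecaster(nn.Module):
    """Level 1: a recurrent forecaster (LSTM or GRU) over price/news features."""
    def __init__(self, n_features, hidden=32, cell="lstm"):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])   # one prediction per window: (batch, 1)

class Blender(nn.Module):
    """Level 2: a fully connected net that blends the level-1 predictions."""
    def __init__(self, n_models):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_models, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, preds):          # preds: (batch, n_models)
        return self.net(preds)

# Toy usage: the level-1 models would be trained first; the blender is then
# fit on their held-out predictions.
lstm = RNNForecaster(n_features=10, cell="lstm")
gru = RNNForecaster(n_features=10, cell="gru")
blender = Blender(n_models=2)
x = torch.randn(16, 30, 10)            # 16 windows of 30 steps, 10 features
blended = blender(torch.cat([lstm(x), gru(x)], dim=1))
```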
This list is automatically generated from the titles and abstracts of the papers in this site.