Stock Price Prediction using Dynamic Neural Networks
- URL: http://arxiv.org/abs/2306.12969v1
- Date: Sun, 18 Jun 2023 20:06:44 GMT
- Title: Stock Price Prediction using Dynamic Neural Networks
- Authors: David Noel
- Abstract summary: This paper will analyze and implement a time series dynamic neural network to predict daily closing stock prices.
Neural networks possess unsurpassed abilities in identifying underlying patterns in chaotic, non-linear, and seemingly random data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper will analyze and implement a time series dynamic neural network to
predict daily closing stock prices. Neural networks possess unsurpassed
abilities in identifying underlying patterns in chaotic, non-linear, and
seemingly random data, thus providing a mechanism to predict stock price
movements much more precisely than many current techniques. Contemporary
methods for stock analysis, including fundamental, technical, and regression
techniques, are discussed and compared with the performance of neural
networks. Also, the Efficient Market Hypothesis (EMH) is presented and
contrasted with Chaos theory using neural networks. This paper will refute the
EMH and support Chaos theory. Finally, recommendations for using neural
networks in stock price prediction will be presented.
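The paper does not include an implementation, so the following is only an illustrative sketch of the general idea behind a time-series dynamic (sliding-window) network: feed the last few closing prices into a small network and train it to predict the next one. The architecture, window size, learning rate, and the synthetic "price" series are all assumptions for the sake of a self-contained example, not details from the paper.

```python
# Illustrative sketch only: a focused time-delay (sliding-window) neural network
# that maps the last `window` values of a series to the next value.
import numpy as np

rng = np.random.default_rng(0)

def make_windows(series, window):
    """Turn a 1-D series into (X, y) pairs: X = last `window` values, y = next value."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

class TinyMLP:
    """One-hidden-layer network trained with plain full-batch gradient descent (MSE)."""
    def __init__(self, window, hidden=8, lr=0.05):
        self.W1 = rng.normal(0, 0.5, (window, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return (self.h @ self.W2 + self.b2).ravel()

    def step(self, X, y):
        pred = self.forward(X)
        err = (pred - y)[:, None]                  # dL/dpred (up to a constant)
        dW2 = self.h.T @ err / len(y)
        db2 = err.mean(0)
        dh = err @ self.W2.T * (1 - self.h ** 2)   # backprop through tanh
        dW1 = X.T @ dh / len(y)
        db1 = dh.mean(0)
        for p, g in [(self.W1, dW1), (self.b1, db1), (self.W2, dW2), (self.b2, db2)]:
            p -= self.lr * g
        return float((err ** 2).mean())

# Synthetic stand-in for a scaled closing-price series: a noisy sine wave.
t = np.arange(300)
prices = np.sin(t / 20) + rng.normal(0, 0.05, len(t))
X, y = make_windows(prices, window=10)

model = TinyMLP(window=10)
losses = [model.step(X, y) for _ in range(2000)]
print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")  # training loss should drop markedly
```

In practice, NARX-style dynamic networks also feed delayed network outputs back as inputs; this sketch keeps only the tapped-delay-line input to stay minimal.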
Related papers
- Enhancing Price Prediction in Cryptocurrency Using Transformer Neural Network and Technical Indicators [0.5439020425819]
The methodology integrates technical indicators, a Performer neural network, and a BiLSTM.
The proposed method has been applied to the hourly and daily timeframes of the major cryptocurrencies.
arXiv Detail & Related papers (2024-03-06T10:53:12Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions are commonly assumed to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- The cross-sectional stock return predictions via quantum neural network and tensor network [0.0]
We investigate the application of quantum and quantum-inspired machine learning algorithms to stock return predictions.
We evaluate the performance of quantum neural network, an algorithm suited for noisy intermediate-scale quantum computers, and tensor network, a quantum-inspired machine learning algorithm.
arXiv Detail & Related papers (2023-04-25T00:05:13Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
By addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Generative Adversarial Network (GAN) and Enhanced Root Mean Square Error (ERMSE): Deep Learning for Stock Price Movement Prediction [15.165487282631535]
This paper aims to improve prediction accuracy and minimize forecasting error loss by using Generative Adversarial Networks.
The Generative Adversarial Network (GAN) was found to perform well on the enhanced root mean square error compared to the LSTM.
arXiv Detail & Related papers (2021-11-30T18:38:59Z)
- N-BEATS neural network for mid-term electricity load forecasting [8.430502131775722]
We show that our proposed deep neural network modeling approach is effective at solving the mid-term electricity load forecasting problem.
It is simple to implement and train, it does not require signal preprocessing, and it is equipped with a forecast bias reduction mechanism.
The empirical study shows that the proposed neural network clearly outperforms all competitors in terms of both accuracy and forecast bias.
arXiv Detail & Related papers (2020-09-24T21:48:08Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
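The abstract names SMGD but gives no algorithmic detail, so the sketch below is NOT SMGD: it shows the standard straight-through-estimator recipe for the same problem (training a model whose forward pass uses low-bit quantized weights), with all data and hyperparameters invented for illustration.

```python
# Straight-through-estimator sketch for quantized training (not SMGD itself):
# forward pass uses quantized weights, gradient updates a full-precision shadow copy.
import numpy as np

rng = np.random.default_rng(1)

def quantize(w, step=0.25):
    """Round a weight to a fixed low-bit grid (here multiples of 0.25)."""
    return np.round(w / step) * step

# Data: y = 1.5 * x + noise; the target weight 1.5 lies exactly on the grid.
x = rng.normal(0, 1, 200)
y = 1.5 * x + rng.normal(0, 0.01, 200)

w = 0.0      # full-precision "shadow" weight
lr = 0.1
for _ in range(200):
    wq = quantize(w)                       # forward pass uses the quantized weight
    grad = 2 * np.mean((wq * x - y) * x)   # MSE gradient evaluated at wq
    w -= lr * grad                         # straight-through: update the shadow weight

print(quantize(w))  # the quantized weight recovers the true value 1.5
```

The key design choice is that rounding has zero gradient almost everywhere, so the estimator simply passes the gradient "straight through" to the unquantized shadow parameter.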
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
- A Novel Ensemble Deep Learning Model for Stock Prediction Based on Stock Prices and News [7.578363431637128]
This paper proposes to use sentiment analysis to extract useful information from multiple textual data sources to predict future stock movement.
The blending ensemble model contains two levels. The first level contains two Recurrent Neural Networks (RNNs): a Long Short-Term Memory network (LSTM) and a Gated Recurrent Units network (GRU).
The fully connected neural network is used to ensemble several individual prediction results to further improve the prediction accuracy.
arXiv Detail & Related papers (2020-07-23T15:25:37Z)
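The two-level blending idea in the ensemble paper above can be sketched minimally. The paper's level one uses an LSTM and a GRU; here they are replaced by two simple stand-in predictors (persistence and a moving average) so the example stays self-contained, and level two is a least-squares linear combiner standing in for the fully connected blender. Everything concrete below is an assumption for illustration.

```python
# Two-level blending ensemble sketch: base predictors, then a learned combiner.
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(400, dtype=float)
series = np.sin(t / 15) + 0.002 * t + rng.normal(0, 0.05, len(t))
target = series[1:]                      # next-step values to predict

# Level one: two weak base predictors (stand-ins for the LSTM and GRU).
pred_last = series[:-1]                  # naive persistence: predict the last value
pred_ma = np.array([series[max(0, i - 4):i + 1].mean()   # 5-step moving average
                    for i in range(len(series) - 1)])

base = np.column_stack([pred_last, pred_ma])

# Level two: blend the base predictions with a least-squares linear combiner.
A = np.column_stack([base, np.ones(len(target))])        # add a bias column
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
blend = A @ coef

def mse(p):
    return float(np.mean((p - target) ** 2))

print(mse(pred_last), mse(pred_ma), mse(blend))  # blend is never worse in-sample
```

On the training data, the least-squares blend can never have higher MSE than any single base predictor it combines; whether that advantage survives out-of-sample is exactly what the ensemble paper's evaluation addresses.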
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.