Deep Stock Predictions
- URL: http://arxiv.org/abs/2006.04992v1
- Date: Mon, 8 Jun 2020 23:37:47 GMT
- Title: Deep Stock Predictions
- Authors: Akash Doshi, Alexander Issa, Puneet Sachdeva, Sina Rafati, Somnath
Rakshit
- Abstract summary: We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find the LSTM model with the customized loss function to deliver improved performance in the trading bot over an autoregressive baseline such as ARIMA.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forecasting stock prices can be interpreted as a time series prediction
problem, for which Long Short Term Memory (LSTM) neural networks are often used
due to their architecture specifically built to solve such problems. In this
paper, we consider the design of a trading strategy that performs portfolio
optimization using the LSTM stock price prediction for four different
companies. We then customize the loss function used to train the LSTM to
increase the profit earned. Moreover, we propose a data-driven approach for
optimal selection of window length and multi-step prediction length, and
consider the addition of analyst calls as technical indicators to a multi-stack
Bidirectional LSTM strengthened by the addition of Attention units. We find the
LSTM model with the customized loss function to deliver improved performance in
the trading bot over an autoregressive baseline such as ARIMA, while the
addition of analyst calls improves performance for certain datasets.
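The abstract does not spell out the customized loss, so the block below is only a minimal PyTorch sketch of the general idea: an LSTM forecaster trained with a squared error that is up-weighted whenever the predicted direction of the next-day move is wrong, one common way of making a loss "profit-aware". The names `PriceLSTM` and `profit_aware_loss`, the penalty weight, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Minimal LSTM forecaster: a window of past prices -> next-day price."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

def profit_aware_loss(pred, target, last_close, penalty=2.0):
    """Illustrative custom loss (an assumption, not the paper's exact formula):
    squared error, up-weighted when the predicted direction of the move
    disagrees with the realised direction, since such errors cost a trading bot money."""
    se = (pred - target) ** 2
    wrong_dir = torch.sign(pred - last_close) != torch.sign(target - last_close)
    return torch.where(wrong_dir, penalty * se, se).mean()

# Toy training step on synthetic data (placeholder for real price windows).
model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 10, 1)                       # 32 windows of 10 daily prices
last_close = x[:, -1, 0]
target = last_close + 0.05 * torch.randn(32)
loss = profit_aware_loss(model(x), target, last_close)
loss.backward()
opt.step()
```

Penalising direction-inconsistent errors is only one plausible reading of "customized to increase the profit earned"; asymmetric penalties on over- versus under-prediction would fit the same description.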
Related papers
- Scaling Laws for Predicting Downstream Performance in LLMs [75.28559015477137]
This work focuses on the pre-training loss as a more-efficient metric for performance estimation.
We extend the power law analytical function to predict domain-specific pre-training loss based on FLOPs across data sources.
We employ a two-layer neural network to model the non-linear relationship between multiple domain-specific losses and downstream performance.
arXiv Detail & Related papers (2024-10-11T04:57:48Z)
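A small sketch of the two steps this entry describes: fit an extended power law mapping compute to domain-specific pre-training loss, then feed a vector of such losses to a two-layer network that predicts downstream performance. The functional form `a * C**(-b) + c`, the synthetic data points, and the layer sizes are assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit
import torch
import torch.nn as nn

# Step 1: power law from compute (arbitrary FLOP units) to domain pre-training loss.
def power_law(compute, a, b, c):
    return a * np.power(compute, -b) + c

compute = np.array([1.0, 10.0, 100.0, 1_000.0, 10_000.0])     # synthetic budgets
domain_loss = np.array([3.2, 2.8, 2.5, 2.3, 2.2])             # synthetic measurements
(a, b, c), _ = curve_fit(power_law, compute, domain_loss, p0=(1.0, 0.3, 2.0), maxfev=10_000)
print(f"fitted power law: a={a:.2f}, b={b:.2f}, c={c:.2f}")

# Step 2: two-layer network from a vector of domain-specific losses to a downstream metric.
mapper = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
domain_losses = torch.tensor([[2.4, 2.7, 2.1, 3.0, 2.5]])     # one loss per data source
print(mapper(domain_losses))   # untrained; would be fit to (losses, downstream score) pairs
```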
- F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z)
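F-FOMAML augments first-order model-agnostic meta-learning with GNN-derived task metadata and proxy data; those pieces are omitted here. The block below is only a generic first-order MAML (FOMAML) training loop in PyTorch on synthetic regression tasks, to illustrate the inner/outer update structure the summary refers to. All sizes, learning rates, and the `make_task` helper are assumptions.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # toy demand forecaster
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn, inner_lr, inner_steps = nn.MSELoss(), 1e-2, 1

def make_task():
    """Synthetic task: a random linear demand function plus noise (support and query sets)."""
    w = torch.randn(8, 1)
    x_s, x_q = torch.randn(16, 8), torch.randn(16, 8)
    return x_s, x_s @ w + 0.1 * torch.randn(16, 1), x_q, x_q @ w + 0.1 * torch.randn(16, 1)

for step in range(100):                                  # meta-training loop
    meta_opt.zero_grad()
    for _ in range(4):                                   # tasks per meta-batch
        x_s, y_s, x_q, y_q = make_task()
        # Inner loop: plain SGD on the support set, starting from the shared initialisation.
        fast = {k: v.detach().clone().requires_grad_(True) for k, v in model.named_parameters()}
        for _ in range(inner_steps):
            g = torch.autograd.grad(loss_fn(functional_call(model, fast, (x_s,)), y_s),
                                    list(fast.values()))
            fast = {k: (v - inner_lr * gi).detach().requires_grad_(True)
                    for (k, v), gi in zip(fast.items(), g)}
        # Outer step (first-order approximation): the gradient of the query loss at the
        # adapted weights is applied directly to the shared initialisation.
        gq = torch.autograd.grad(loss_fn(functional_call(model, fast, (x_q,)), y_q),
                                 list(fast.values()))
        for p, gi in zip(model.parameters(), gq):
            p.grad = gi if p.grad is None else p.grad + gi
    meta_opt.step()
```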
- Gated recurrent neural network with TPE Bayesian optimization for enhancing stock index prediction accuracy [0.0]
The aim is to improve the prediction accuracy of the next day's closing price of the NIFTY 50 index, a prominent Indian stock market index.
A combination of eight influential factors is carefully chosen from fundamental stock data, technical indicators, crude oil price, and macroeconomic data to train the models.
arXiv Detail & Related papers (2024-06-02T06:39:01Z)
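As a rough illustration of TPE-based hyperparameter search for such a model, the sketch below uses the `hyperopt` library to tune GRU-style hyperparameters (hidden units, learning rate, dropout). The search space is an assumption, and the objective is a smooth synthetic stand-in so the example runs; in the paper's setting it would train the GRU on the eight chosen factors and return validation error on the NIFTY 50 next-day close.

```python
import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK

# Assumed search space for GRU hyperparameters.
space = {
    "units":   hp.quniform("units", 16, 128, 16),
    "lr":      hp.loguniform("lr", np.log(1e-4), np.log(1e-2)),
    "dropout": hp.uniform("dropout", 0.0, 0.5),
}

def objective(params):
    """Stand-in objective: a synthetic bowl with a minimum near
    (units=64, lr=3e-3, dropout=0.2). A real run would train the GRU
    and return its validation loss instead."""
    val_loss = ((params["units"] - 64) ** 2 / 1e4
                + (params["dropout"] - 0.2) ** 2
                + (np.log(params["lr"]) - np.log(3e-3)) ** 2 / 10)
    return {"loss": float(val_loss), "status": STATUS_OK}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)   # e.g. {'dropout': ..., 'lr': ..., 'units': ...}
```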
- Enhancing Financial Data Visualization for Investment Decision-Making [0.04096453902709291]
This paper delves into the potential of Long Short-Term Memory (LSTM) networks for predicting stock dynamics.
The study incorporates multiple features to enhance LSTM's capacity in capturing complex patterns.
The LSTM incorporates price and volume attributes over a 25-day time step.
arXiv Detail & Related papers (2023-12-09T07:53:25Z)
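A small numpy sketch of the kind of input construction this entry describes: turning daily close prices and volumes into fixed 25-day windows with next-day-close targets. The helper name `make_windows` and the synthetic random-walk data are assumptions.

```python
import numpy as np

def make_windows(prices, volumes, window=25):
    """Build (samples, window, 2) inputs from close prices and volumes,
    paired with the next day's close as the target."""
    feats = np.stack([prices, volumes], axis=-1)                       # (days, 2)
    X = np.stack([feats[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]                                                # next-day close
    return X, y

# Toy usage with a synthetic random walk (placeholder data).
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 300))
volume = rng.integers(100_000, 1_000_000, 300).astype(float)
X, y = make_windows(close, volume, window=25)
print(X.shape, y.shape)    # (275, 25, 2) (275,)
```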
- ResNLS: An Improved Model for Stock Price Forecasting [1.2039469573641217]
We introduce a hybrid model that improves stock price prediction by emphasizing the dependencies between adjacent stock prices.
In predicting the SSE Composite Index, our experiment reveals that when the closing price data for the previous 5 consecutive trading days is used as the input, the performance of the model (ResNLS-5) is optimal.
It also demonstrates at least a 20% improvement over the current state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-02T03:55:37Z)
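The exact ResNLS architecture is not given in this summary, so the sketch below only illustrates the general residual-convolution-then-LSTM idea on a 5-day close-price window (matching the ResNLS-5 setting mentioned above); channel counts and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ResNLSSketch(nn.Module):
    """Rough sketch: a residual 1-D convolution emphasises dependencies between
    adjacent closing prices, then an LSTM models the resulting 5-step sequence."""
    def __init__(self, window=5, channels=16, hidden=32):
        super().__init__()
        self.proj = nn.Conv1d(1, channels, kernel_size=1)
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, window) of closing prices
        h = self.proj(x.unsqueeze(1))            # (batch, channels, window)
        h = torch.relu(h + self.conv(h))         # residual connection
        out, _ = self.lstm(h.transpose(1, 2))    # (batch, window, channels) -> LSTM
        return self.head(out[:, -1])             # next-day close

model = ResNLSSketch()
print(model(torch.randn(8, 5)).shape)            # torch.Size([8, 1])
```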
- Scaling Relationship on Learning Mathematical Reasoning with Large Language Models [75.29595679428105]
We investigate how the pre-training loss, supervised data amount, and augmented data amount influence the reasoning performances of a supervised LLM.
We find that rejection sampling from multiple models pushes LLaMA-7B to an accuracy of 49.3% on GSM8K, significantly outperforming the supervised fine-tuning (SFT) accuracy of 35.9%.
arXiv Detail & Related papers (2023-08-03T15:34:01Z)
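A minimal sketch of the rejection-sampling step this entry refers to: keep sampled solutions (possibly drawn from several models) whose extracted final answer matches the reference, and deduplicate them before using them as augmented fine-tuning data. The answer-extraction regex and helper names are simplified assumptions, not the paper's pipeline.

```python
import re

def extract_answer(solution: str) -> str:
    """Pull the final number out of a chain-of-thought solution
    (a simplified stand-in for GSM8K-style answer extraction)."""
    nums = re.findall(r"-?\d+\.?\d*", solution.replace(",", ""))
    return nums[-1] if nums else ""

def rejection_sample(question, reference_answer, samples):
    """Keep only sampled solutions whose final answer matches the reference,
    deduplicated so the augmented SFT set stays diverse."""
    kept, seen = [], set()
    for sol in samples:
        if extract_answer(sol) == reference_answer and sol not in seen:
            seen.add(sol)
            kept.append({"question": question, "solution": sol})
    return kept

# Toy usage with hand-written candidate solutions.
cands = ["2+3=5, so the answer is 5", "The answer is 6", "2+3 gives 5. Answer: 5"]
print(rejection_sample("What is 2+3?", "5", cands))
```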
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Stock Price Prediction Using Temporal Graph Model with Value Chain Data [3.1641827542160805]
We introduce a neural network-based stock return prediction method, the Long Short-Term Memory Graph Convolutional Neural Network (LSTM-GCN) model.
Our experiments demonstrate that the LSTM-GCN model can capture additional information from value chain data that are not fully reflected in price data.
arXiv Detail & Related papers (2023-03-07T17:24:04Z)
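The sketch below only illustrates the LSTM-plus-graph-convolution idea in this entry: an LSTM encodes each stock's price history, and a single row-normalised graph convolution mixes the embeddings along value-chain (supplier/customer) edges before predicting a return per stock. Layer sizes, the adjacency matrix, and the single GCN step are assumptions, not the paper's exact LSTM-GCN.

```python
import torch
import torch.nn as nn

class LSTMGCNSketch(nn.Module):
    """LSTM per stock, then one graph-convolution step over the value-chain graph."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gcn = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        # x: (n_stocks, window, n_features); adj: (n_stocks, n_stocks) value-chain graph
        h, _ = self.lstm(x)
        h = h[:, -1]                                   # (n_stocks, hidden)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.gcn((adj / deg) @ h))      # row-normalised neighbourhood averaging
        return self.head(h).squeeze(-1)                # predicted return per stock

model = LSTMGCNSketch()
adj = torch.eye(4) + torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                                   [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
print(model(torch.randn(4, 20, 1), adj).shape)         # torch.Size([4])
```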
- Long Short-Term Memory Neural Network for Financial Time Series [0.0]
We present an ensemble of independent and parallel long short-term memory neural networks for the prediction of stock price movement.
With a straightforward trading strategy, comparisons with a randomly chosen portfolio and a portfolio containing all the stocks in the index show that the portfolio resulting from the LSTM ensemble provides better average daily returns and higher cumulative returns over time.
arXiv Detail & Related papers (2022-01-20T15:17:26Z)
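A toy numpy sketch of the ensemble-plus-simple-strategy comparison this entry describes: average the forecasts of several independently trained LSTMs, go long the stocks with a positive consensus forecast, and compare against holding every stock in the index. The random numbers stand in for real model outputs and realised returns.

```python
import numpy as np

def ensemble_signal(predictions):
    """Average next-day return forecasts from independent LSTMs and
    go long the stocks with a positive consensus forecast."""
    consensus = np.mean(predictions, axis=0)     # (n_stocks,)
    return consensus > 0

# Toy example: 5 forecasters, 10 stocks, plus realised next-day returns.
rng = np.random.default_rng(1)
preds = rng.normal(0, 0.01, size=(5, 10))        # stand-in model outputs
realised = rng.normal(0, 0.01, size=10)

longs = ensemble_signal(preds)
portfolio_ret = realised[longs].mean() if longs.any() else 0.0
benchmark_ret = realised.mean()                  # "hold everything" baseline
print(f"ensemble portfolio {portfolio_ret:.4%} vs index {benchmark_ret:.4%}")
```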
- Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory (LSTM) layers to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
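A compact PyTorch sketch in the spirit of this last entry: stacked LSTM layers summarise the sensor sequence and an MLP head regresses the RUL. The number of sensors, layer sizes, and the single-output head are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class RULNet(nn.Module):
    """LSTM layers over the sensor sequence, MLP head for the RUL regression."""
    def __init__(self, n_sensors=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, num_layers=2, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                            # x: (batch, time, n_sensors)
        h, _ = self.lstm(x)
        return self.mlp(h[:, -1]).squeeze(-1)        # predicted RUL per sequence

model = RULNet()
print(model(torch.randn(8, 30, 14)).shape)           # torch.Size([8])
```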