Compatible deep neural network framework with financial time series
data, including data preprocessor, neural network model and trading strategy
- URL: http://arxiv.org/abs/2205.08382v1
- Date: Wed, 11 May 2022 20:44:08 GMT
- Title: Compatible deep neural network framework with financial time series
data, including data preprocessor, neural network model and trading strategy
- Authors: Mohammadmahdi Ghahramani, Hamid Esmaeili Najafabadi
- Abstract summary: This research introduces a new deep neural network architecture and a novel idea of how to prepare financial data before feeding them to the model.
Three different datasets are used to evaluate this method, where results indicate that this framework can provide us with profitable and robust predictions.
- Score: 2.347843817145202
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Experience has shown that trading in stock and cryptocurrency markets has the
potential to be highly profitable. In this light, considerable effort has
recently been devoted to investigating how to apply machine learning and deep
learning to interpret and predict market behavior. This research introduces a new deep
neural network architecture and a novel idea of how to prepare financial data
before feeding them to the model. In the data preparation part, the first step
is to generate many features using technical indicators and then apply the
XGBoost model for feature engineering. Splitting data into three categories and
using separate autoencoders, we extract high-level mixed features in the
second step. This data preprocessing is introduced to predict price movements.
Regarding modeling, different convolutional layers, a long short-term memory
unit, and several fully-connected layers have been designed to perform binary
classification. This research also introduces a trading strategy to exploit the
trained model outputs. Three different datasets are used to evaluate this
method, where results indicate that this framework can provide us with
profitable and robust predictions.
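To make the described pipeline concrete, the following Python snippet is a minimal sketch of how the stages in the abstract could be wired together with common libraries (XGBoost for feature selection, Keras for the autoencoders and the CNN-LSTM classifier). All function names, layer sizes, the number of retained features, and the trading thresholds are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the pipeline sketched in the abstract:
# indicator features -> XGBoost feature selection -> per-group autoencoders
# -> CNN + LSTM binary classifier -> threshold trading rule.
import numpy as np
from xgboost import XGBClassifier
from tensorflow.keras import layers, models

def select_features_xgb(X, y, keep=32):
    """Rank technical-indicator features by XGBoost importance, keep the top ones."""
    xgb = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    xgb.fit(X, y)
    top = np.argsort(xgb.feature_importances_)[::-1][:keep]
    return X[:, top], top

def fit_autoencoder(X_group, code_dim=8, epochs=20):
    """Compress one feature category into a low-dimensional code; returns the encoder."""
    inp = layers.Input(shape=(X_group.shape[1],))
    code = layers.Dense(code_dim, activation="relu")(inp)
    out = layers.Dense(X_group.shape[1], activation="linear")(code)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(X_group, X_group, epochs=epochs, batch_size=64, verbose=0)
    return models.Model(inp, code)

def build_classifier(window, n_features):
    """Conv1D blocks -> LSTM -> dense head for up/down (binary) classification."""
    inp = layers.Input(shape=(window, n_features))
    x = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
    x = layers.LSTM(64)(x)
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def trade_signal(prob_up, long_th=0.6, short_th=0.4):
    """Placeholder threshold rule on the model's probability output."""
    if prob_up >= long_th:
        return "long"
    if prob_up <= short_th:
        return "short"
    return "flat"
```

In this reading, one autoencoder would be fitted per feature category and their codes concatenated into the windowed input of the classifier; the threshold rule is only a stand-in for the trading strategy the paper actually introduces, whose exact form is given in the full text.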
Related papers
- AI-Powered Energy Algorithmic Trading: Integrating Hidden Markov Models with Neural Networks [0.0]
This study introduces a new approach that combines Hidden Markov Models (HMM) and neural networks, integrated with Black-Litterman portfolio optimization.
During the COVID period (2019-2022), this dual-model approach achieved an 83% return with a Sharpe ratio of 0.77.
arXiv Detail & Related papers (2024-07-29T10:26:52Z) - GraphCNNpred: A stock market indices prediction using a Graph based deep learning system [0.0]
We present a graph neural network based convolutional neural network (CNN) model that can be applied to diverse sources of data, in an attempt to extract features to predict the trends of the S&P 500, NASDAQ, DJI, NYSE, and RUSSELL indices.
Experiments show that the associated models improve prediction performance on all indices over the baseline algorithms by about 4% to 15% in terms of F-measure.
arXiv Detail & Related papers (2024-07-04T09:14:24Z) - F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z) - How Much Data are Enough? Investigating Dataset Requirements for Patch-Based Brain MRI Segmentation Tasks [74.21484375019334]
Training deep neural networks reliably requires access to large-scale datasets.
To mitigate both the time and financial costs associated with model development, a clear understanding of the amount of data required to train a satisfactory model is crucial.
This paper proposes a strategic framework for estimating the amount of annotated data required to train patch-based segmentation networks.
arXiv Detail & Related papers (2024-04-04T13:55:06Z) - Augmented Bilinear Network for Incremental Multi-Stock Time-Series
Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm based on graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Bilinear Input Normalization for Neural Networks in Financial
Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z) - Evaluating data augmentation for financial time series classification [85.38479579398525]
We evaluate several augmentation methods applied to stocks datasets using two state-of-the-art deep learning models.
For a relatively small dataset, augmentation methods achieve up to 400% improvement in risk-adjusted return performance.
For a larger stock dataset, augmentation methods achieve up to 40% improvement.
arXiv Detail & Related papers (2020-10-28T17:53:57Z) - A Deep Learning Framework for Predicting Digital Asset Price Movement
from Trade-by-trade Data [20.392440676633573]
This paper presents a framework that predicts price movement of cryptocurrencies from trade-by-trade data.
The model is trained to achieve high performance on nearly a year of trade-by-trade data.
In a realistic trading simulation setting, the prediction made by the model could be easily monetized.
arXiv Detail & Related papers (2020-10-11T10:42:02Z) - Deep Learning Based on Generative Adversarial and Convolutional Neural
Networks for Financial Time Series Predictions [0.0]
This paper proposes the implementation of a generative adversarial network (GAN) composed of a bidirectional long short-term memory (LSTM) network and a convolutional neural network (CNN).
Bi-LSTM-CNN generates synthetic data that agree with existing real financial data, so that the features of stocks with positive or negative trends can be retained to predict the future trend of a stock.
arXiv Detail & Related papers (2020-08-08T08:42:46Z) - A Novel Ensemble Deep Learning Model for Stock Prediction Based on Stock
Prices and News [7.578363431637128]
This paper proposes to use sentiment analysis to extract useful information from multiple textual data sources to predict future stock movement.
The blending ensemble model contains two levels. The first level contains two Recurrent Neural Networks (RNNs): one Long Short-Term Memory network (LSTM) and one Gated Recurrent Unit network (GRU).
At the second level, a fully connected neural network combines the individual prediction results to further improve the prediction accuracy.
arXiv Detail & Related papers (2020-07-23T15:25:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.