Short-Term Stock Price-Trend Prediction Using Meta-Learning
- URL: http://arxiv.org/abs/2105.13599v1
- Date: Fri, 28 May 2021 06:03:05 GMT
- Title: Short-Term Stock Price-Trend Prediction Using Meta-Learning
- Authors: Shin-Hung Chang, Cheng-Wen Hsu, Hsing-Ying Li, Wei-Sheng Zeng,
Jan-Ming Ho
- Abstract summary: We consider short-term stock price prediction using a meta-learning framework with several convolutional neural networks.
We propose a sliding time horizon to label stocks according to their predicted price trends.
The effectiveness of the proposed meta-learning framework was evaluated by application to the S&P500.
- Score: 1.8899300124593645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although conventional machine learning algorithms have been widely adopted
for stock-price prediction in recent years, the massive volumes of specific
labeled data they require are not always available. In contrast, meta-learning
methods, often referred to as fast learners, use relatively small amounts of
training data. Such methods are beneficial under conditions of limited data
availability, which frequently arise in trend prediction from sparse
time-series data. In this study, we consider short-term stock
price prediction using a meta-learning framework with several convolutional
neural networks, including the temporal convolution network, fully
convolutional network, and residual neural network. We propose a sliding time
horizon to label stocks according to their predicted price trends, referred to
as dynamic k-average labeling, using prediction labels including "rise
plus", "rise", "fall", and "fall plus". The effectiveness of the proposed
meta-learning framework was evaluated by application to the S&P500. The
experimental results show that the inclusion of the proposed meta-learning
framework significantly improved both regular and balanced prediction accuracy
and profitability.
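The abstract does not spell out the labeling rule, so the following is a minimal sketch of one plausible reading of dynamic k-average labeling: each day is labeled by comparing the average closing price over a sliding horizon of the next k days with the current close and thresholding the relative change into the four classes. The k-day mean, the 2% threshold, and the function name are illustrative assumptions, not the authors' exact definition.

```python
import numpy as np

def k_average_labels(close, k=5, strong=0.02):
    """Label each day by comparing the mean close over the next k days with
    the current close (a hedged reading of dynamic k-average labeling; the
    k-day mean and the `strong` threshold are illustrative assumptions).

    Returns one of: "rise plus", "rise", "fall", "fall plus".
    """
    close = np.asarray(close, dtype=float)
    labels = []
    for t in range(len(close) - k):
        future_avg = close[t + 1 : t + 1 + k].mean()  # sliding k-day horizon
        change = (future_avg - close[t]) / close[t]   # relative price move
        if change >= strong:
            labels.append("rise plus")
        elif change >= 0.0:
            labels.append("rise")
        elif change > -strong:
            labels.append("fall")
        else:
            labels.append("fall plus")
    return labels

# Toy usage: label a short synthetic price path.
prices = [100, 101, 103, 102, 104, 106, 105, 103, 100, 98, 97]
print(k_average_labels(prices, k=3))
```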
Related papers
- GraphCNNpred: A stock market indices prediction using a Graph based deep learning system [0.0]
We present a graph neural network based convolutional neural network (CNN) model that can be applied to diverse sources of data to extract features for predicting the trends of the S&P 500, NASDAQ, DJI, NYSE, and RUSSELL indices.
Experiments show that the associated models improve prediction performance on all indices over the baseline algorithms by about 4% to 15% in terms of F-measure.
arXiv Detail & Related papers (2024-07-04T09:14:24Z) - F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
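F-FOMAML builds on first-order MAML (FOMAML). As a hedged illustration of the first-order meta-update only (not the paper's GNN-enhanced, feature-based variant or its proxy data), here is a toy numpy sketch on synthetic linear-regression tasks; the task generator, learning rates, and task batch size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """Toy regression task y = w*x + b with task-specific w, b (an illustrative
    stand-in for a forecasting task; not the paper's data)."""
    w, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=(20, 1))
    y = w * x + b + 0.05 * rng.standard_normal((20, 1))
    return (x[:10], y[:10]), (x[10:], y[10:])  # support / query split

def loss_grad(theta, x, y):
    """MSE loss and gradient for the linear model y_hat = theta[0]*x + theta[1]."""
    err = theta[0] * x + theta[1] - y
    g = np.array([2 * (err * x).mean(), 2 * err.mean()])
    return (err ** 2).mean(), g

theta = np.zeros(2)                 # meta-parameters
inner_lr, meta_lr = 0.1, 0.05

for step in range(500):
    meta_grad = np.zeros_like(theta)
    for _ in range(4):              # small batch of tasks per meta-step
        (xs, ys), (xq, yq) = make_task()
        # Inner loop: one SGD step on the support set.
        _, g_support = loss_grad(theta, xs, ys)
        adapted = theta - inner_lr * g_support
        # First-order approximation: use the query-set gradient at the adapted
        # parameters directly as the meta-gradient (no second-order terms).
        _, g_query = loss_grad(adapted, xq, yq)
        meta_grad += g_query / 4
    theta -= meta_lr * meta_grad

print("meta-initialization:", theta)
```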
arXiv Detail & Related papers (2024-06-23T21:28:50Z) - Enhancing Mean-Reverting Time Series Prediction with Gaussian Processes:
Functional and Augmented Data Structures in Financial Forecasting [0.0]
We explore the application of Gaussian Processes (GPs) for predicting mean-reverting time series with an underlying structure.
GPs offer the potential to forecast not just the average prediction but the entire probability distribution over a future trajectory.
This is particularly beneficial in financial contexts, where accurate predictions alone may not suffice if incorrect volatility assessments lead to capital losses.
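As a minimal illustration of forecasting a full predictive distribution rather than a point estimate, the sketch below fits scikit-learn's GaussianProcessRegressor to a synthetic mean-reverting series; the kernel, the synthetic data, and the 20-step horizon are assumptions and do not reproduce the paper's functional or augmented data structures.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic mean-reverting series (an Ornstein-Uhlenbeck-like walk), standing in
# for the financial data.
rng = np.random.default_rng(1)
n = 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = y[t - 1] - 0.1 * y[t - 1] + 0.2 * rng.standard_normal()

t_train = np.arange(n).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(t_train, y)

# Forecast the next 20 steps: the GP returns a mean and a standard deviation
# per horizon step, i.e. a full predictive distribution, not just a point forecast.
t_future = np.arange(n, n + 20).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
print(mean[:5], std[:5])
```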
arXiv Detail & Related papers (2024-02-23T06:09:45Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
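A hedged sketch of the selective-prediction side of this idea: the model abstains whenever its confidence falls below a threshold, and the abstained samples are natural candidates for an active-learning query. The probability input, the 0.8 threshold, and the -1 abstain marker are illustrative choices, not the ASPEST method itself.

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Abstain whenever the top-class confidence falls below `threshold`.
    `probs` is an (n_samples, n_classes) array of predicted probabilities;
    the 0.8 threshold is an illustrative assumption."""
    confidence = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(confidence >= threshold, preds, -1)  # -1 marks "abstain"

# The abstained (least confident) samples would be sent to a human labeler.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.20, 0.80]])
decisions = selective_predict(probs)
print(decisions)                                   # e.g. [ 0 -1  1]
print("query these indices:", np.where(decisions == -1)[0])
```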
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Augmented Bilinear Network for Incremental Multi-Stock Time-Series
Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
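A minimal PyTorch sketch of this freeze-and-augment idea, assuming a generic feed-forward feature extractor rather than the paper's bilinear architecture: the pre-trained weights are kept fixed and only a small augmented branch plus a new head are optimized on data from the new securities. Layer sizes and the four-class head are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical feature extractor pre-trained on the original set of securities.
pretrained = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in pretrained.parameters():
    p.requires_grad = False          # prior knowledge is kept fixed

# Augmented connections: a small trainable branch whose output is added to the
# frozen features and adapted on the new securities' data.
augment = nn.Linear(32, 64)
head = nn.Linear(64, 4)              # e.g. four trend classes

optimizer = torch.optim.Adam(
    list(augment.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(8, 32)               # a batch from the new securities
y = torch.randint(0, 4, (8,))
logits = head(pretrained(x) + augment(x))
loss = nn.functional.cross_entropy(logits, y)
loss.backward()                      # gradients flow only into the augmented parts
optimizer.step()
print(loss.item())
```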
arXiv Detail & Related papers (2022-07-23T18:54:10Z) - Forecasting of Non-Stationary Sales Time Series Using Deep Learning [0.0]
The paper describes a deep learning approach for forecasting non-stationary time series using time-trend correction in a neural network model.
The results show that the forecasting accuracy can be essentially improved for non-stationary sales with time trends using the trend correction block.
arXiv Detail & Related papers (2022-05-23T21:06:27Z) - Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics
in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
By addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z) - Multivariate Anomaly Detection based on Prediction Intervals Constructed
using Deep Learning [0.0]
We benchmark our approach against well-established statistical models that are often preferred in practice.
We focus on three deep learning architectures, namely, cascaded neural networks, reservoir computing and long short-term memory recurrent neural networks.
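As a hedged sketch of anomaly detection from prediction intervals, independent of which of the three architectures produces the forecast: a point is flagged when it falls outside the interval implied by the predicted mean and standard deviation. The Gaussian 95% interval (z = 1.96) and the toy numbers are assumptions.

```python
import numpy as np

def flag_anomalies(y_true, y_pred, pred_std, z=1.96):
    """Flag observations that fall outside the model's prediction interval.
    `y_pred` and `pred_std` could come from any forecaster that outputs an
    uncertainty estimate; z = 1.96 (a Gaussian 95% interval) is an assumption."""
    lower = y_pred - z * pred_std
    upper = y_pred + z * pred_std
    return (y_true < lower) | (y_true > upper)

y_true = np.array([1.0, 1.2, 5.0, 0.9])
y_pred = np.array([1.1, 1.1, 1.0, 1.0])
pred_std = np.array([0.2, 0.2, 0.2, 0.2])
print(flag_anomalies(y_true, y_pred, pred_std))   # only the third point is flagged
```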
arXiv Detail & Related papers (2021-10-07T12:34:31Z) - Bilinear Input Normalization for Neural Networks in Financial
Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We show that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering; a minimal sketch of such a preprocessing step follows below.
arXiv Detail & Related papers (2020-10-16T08:29:36Z)
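A minimal sketch of trend filtering as a preprocessing step, assuming a centered moving-average filter as a stand-in for the paper's trend filtering (the actual filter and window are not specified in the summary above): the extracted trend and/or the detrended residual would be fed to the temporal network instead of the raw series.

```python
import numpy as np

def moving_average_trend(y, window=7):
    """Extract a smooth trend with a centered moving average; a simple stand-in
    for trend filtering used as preprocessing (the filter type and window size
    are illustrative assumptions)."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(np.asarray(y, dtype=float), pad, mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")
    return trend[: len(y)]

# The trend and the detrended residual are candidate inputs for the temporal network.
y = np.sin(np.linspace(0, 6, 100)) + 0.1 * np.random.default_rng(2).standard_normal(100)
trend = moving_average_trend(y, window=7)
residual = y - trend
print(trend[:5], residual[:5])
```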