Stock Trading Volume Prediction with Dual-Process Meta-Learning
- URL: http://arxiv.org/abs/2211.01762v1
- Date: Tue, 11 Oct 2022 13:35:20 GMT
- Title: Stock Trading Volume Prediction with Dual-Process Meta-Learning
- Authors: Ruibo Chen, Wei Li, Zhiyuan Zhang, Ruihan Bao, Keiko Harimoto, Xu Sun
- Abstract summary: We propose a dual-process meta-learning method that treats the prediction of each stock as one task under the meta-learning framework.
Our method can model the common pattern behind different stocks with a meta-learner, while modeling the specific pattern for each stock across time spans with stock-dependent parameters.
- Score: 20.588377161807916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Volume prediction is one of the fundamental objectives in the Fintech area,
which is helpful for many downstream tasks, e.g., algorithmic trading. Previous
methods mostly learn a universal model for different stocks. However, this kind
of practice omits the specific characteristics of individual stocks by applying
the same set of parameters for different stocks. On the other hand, learning
different models for each stock would face data sparsity or cold start problems
for many stocks with small capitalization. To take advantage of the data scale
and the various characteristics of individual stocks, we propose a dual-process
meta-learning method that treats the prediction of each stock as one task under
the meta-learning framework. Our method can model the common pattern behind
different stocks with a meta-learner, while modeling the specific pattern for
each stock across time spans with stock-dependent parameters. Furthermore, we
propose to mine the pattern of each stock in the form of a latent variable
which is then used for learning the parameters for the prediction module. This
makes the prediction procedure aware of the data pattern. Extensive experiments
on volume predictions show that our method can improve the performance of
various baseline models. Further analyses testify to the effectiveness of our
proposed meta-learning framework.
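The dual-process idea in the abstract, shared meta-learned parameters for the common pattern plus stock-dependent parameters generated from a mined latent variable, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's architecture: the hypernetwork, dimensions, and random inputs are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative only).
D_IN, D_HID, D_Z = 8, 16, 4

# Shared, meta-learned parameters: the common pattern across all stocks.
W_shared = rng.normal(scale=0.1, size=(D_IN, D_HID))

# Hypothetical hypernetwork: maps a stock's latent pattern variable z to
# the weights of that stock's prediction head (stock-dependent parameters).
W_hyper = rng.normal(scale=0.1, size=(D_Z, D_HID))

def predict_volume(x, z):
    """Predict trading volume for one stock over one time span.

    x: (D_IN,) feature vector; z: (D_Z,) latent variable mined from the
    stock's own history.
    """
    h = np.tanh(x @ W_shared)  # common pattern (meta-learner)
    w_head = z @ W_hyper       # stock-specific head weights
    return float(h @ w_head)   # scalar volume prediction

# Two stocks share W_shared but differ through their latent z.
x = rng.normal(size=D_IN)
z_a = rng.normal(size=D_Z)
z_b = rng.normal(size=D_Z)
print(predict_volume(x, z_a), predict_volume(x, z_b))
```

The point of the sketch is that data scale is exploited through `W_shared`, while per-stock characteristics enter only through the low-dimensional latent `z`, avoiding a full per-stock model.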
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- Some variation of COBRA in sequential learning setup [0.0]
We use specific data preprocessing techniques that radically change the prediction behaviour.
Our proposed methodologies outperform all state-of-the-art comparative models.
We illustrate the methodologies through eight time series datasets from three categories: cryptocurrency, stock index, and short-term load forecasting.
arXiv Detail & Related papers (2024-04-07T17:41:02Z)
- EAMDrift: An interpretable self retrain model for time series [0.0]
We present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric.
EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves comparable accuracy results to non-interpretable ensemble models.
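Combining forecasts by weighting each predictor according to a performance metric can be sketched as below. The inverse-error weighting rule and function names are assumptions for illustration, not EAMDrift's exact formulation.

```python
def combine(forecasts, recent_errors, eps=1e-8):
    """Weight each model's forecast by its inverse recent error.

    forecasts: one prediction per model; recent_errors: that model's
    error on a recent window (hypothetical performance metric).
    """
    weights = [1.0 / (e + eps) for e in recent_errors]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * f for w, f in zip(weights, forecasts))

# The model with the lower recent error dominates the combination,
# so the result stays close to 10.
print(combine([10.0, 20.0], [0.1, 1.0]))
```

Because the weights are recomputed from recent errors at each moment, the combination shifts toward whichever model currently fits the (possibly out-of-distribution) data pattern.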
arXiv Detail & Related papers (2023-05-31T13:25:26Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Stock Index Prediction using Cointegration test and Quantile Loss [0.0]
We propose a method that achieves better performance in terms of returns by selecting informative factors.
We compare the two RNN variants with quantile loss with only five factors obtained through the cointegration test.
Our experimental results show that our proposed method outperforms the other conventional approaches.
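The quantile loss mentioned above has a standard form (the pinball loss); the sketch below shows it for a single quantile tau, omitting the cointegration test and RNN components the paper builds on.

```python
def quantile_loss(y_true, y_pred, tau):
    """Pinball loss for quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    # Positive diff (under-prediction) is scaled by tau,
    # negative diff (over-prediction) by (1 - tau).
    return max(tau * diff, (tau - 1.0) * diff)

# With tau = 0.9, under-predicting by 2 (first call) costs far more
# than over-predicting by 2 (second call).
print(quantile_loss(10.0, 8.0, 0.9), quantile_loss(10.0, 12.0, 0.9))
```

Minimizing this loss over samples pushes the prediction toward the tau-quantile of the target distribution rather than its mean.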
arXiv Detail & Related papers (2021-09-29T16:20:29Z)
- Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks [62.63852372239708]
Meta learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret such learning methodology as learning an explicit hyperparameter prediction function shared by all training tasks.
Such a setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks.
arXiv Detail & Related papers (2021-07-06T04:05:08Z)
- Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport [8.617532047238461]
We propose a novel architecture, Temporal Routing Adaptor (TRA), to empower existing stock prediction models with the ability to model multiple stock trading patterns.
TRA is a lightweight module that consists of a set of independent predictors for learning multiple patterns, as well as a router to dispatch samples to different predictors.
We show that the proposed method can improve the information coefficient (IC) from 0.053 to 0.059 and from 0.051 to 0.056, respectively.
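The predictors-plus-router structure can be sketched minimally as below. The router here is a hypothetical hand-written rule standing in for TRA's learned router (the paper trains the router with optimal transport, which is omitted).

```python
# Two independent predictors, each specialized for one trading pattern
# (toy linear rules for illustration only).
predictors = [
    lambda x: 2.0 * x,   # pattern A: momentum-like
    lambda x: -1.0 * x,  # pattern B: reversal-like
]

def router(x):
    # Hypothetical dispatch rule: sign of the feature picks the expert.
    return 0 if x >= 0 else 1

def tra_predict(x):
    """Dispatch the sample to one predictor and return its output."""
    return predictors[router(x)](x)

print(tra_predict(1.5), tra_predict(-1.5))  # 3.0 1.5
```

Each sample is handled by exactly one predictor, so the module adds little overhead while letting different patterns be modeled by different parameters.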
arXiv Detail & Related papers (2021-06-24T12:19:45Z)
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z)
- A Primal-Dual Subgradient Approach for Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation and accurate generalization to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
arXiv Detail & Related papers (2020-09-26T19:47:38Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
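The confidence-weighted prototype update described above can be sketched as below. The per-query confidences are given as inputs here, whereas the paper meta-learns them; all names and numbers are illustrative.

```python
def update_prototype(prototype, queries, confidences):
    """Update a class prototype with confidence-weighted query embeddings.

    The original prototype (from labeled supports) carries weight 1.0;
    each unlabeled query embedding contributes with its confidence.
    """
    num = list(prototype)
    denom = 1.0
    for q, c in zip(queries, confidences):
        num = [n + c * qi for n, qi in zip(num, q)]
        denom += c
    return [n / denom for n in num]

proto = [0.0, 0.0]
queries = [[1.0, 0.0], [0.0, 4.0]]
# The low-confidence second query (0.5) pulls the prototype less
# per unit of embedding than the high-confidence first one (1.0).
print(update_prototype(proto, queries, [1.0, 0.5]))
```

Assigning near-zero confidence to an unreliable query effectively excludes it from the update, which is why learning the confidences matters.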
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.