Generating Synergistic Formulaic Alpha Collections via Reinforcement
Learning
- URL: http://arxiv.org/abs/2306.12964v1
- Date: Thu, 25 May 2023 13:41:07 GMT
- Title: Generating Synergistic Formulaic Alpha Collections via Reinforcement
Learning
- Authors: Shuo Yu, Hongyan Xue, Xiang Ao, Feiyang Pan, Jia He, Dandan Tu, and
Qing He
- Abstract summary: We propose a new alpha-mining framework that prioritizes mining a synergistic set of alphas.
We show that our framework is able to achieve higher returns compared to previous approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the field of quantitative trading, it is common practice to transform raw
historical stock data into indicative signals for the market trend. Such
signals are called alpha factors. Alphas in formula forms are more
interpretable and thus favored by practitioners concerned with risk. In
practice, a set of formulaic alphas is often used together for better modeling
precision, so we need to find synergistic formulaic alpha sets that work well
together. However, most traditional alpha generators mine alphas one by one
separately, overlooking the fact that the alphas would be combined later. In
this paper, we propose a new alpha-mining framework that prioritizes mining a
synergistic set of alphas, i.e., it directly uses the performance of the
downstream combination model to optimize the alpha generator. Our framework
also leverages the strong exploratory capabilities of reinforcement
learning~(RL) to better explore the vast search space of formulaic alphas. The
contribution of each alpha to the combination model's performance is assigned
as the return in the RL process, driving the alpha generator to find better
alphas that improve upon the current set. Experimental evaluations on
real-world stock market data demonstrate both the effectiveness and the
efficiency of our framework for stock trend forecasting. The investment
simulation results show that our framework is able to achieve higher returns
compared to previous approaches.
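The reward design described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the equal-weight combiner stands in for the paper's learned combination model, and the names `ic`, `combination_ic`, and `rl_return` are assumptions.

```python
import numpy as np

def ic(signal, fwd_returns):
    """Information coefficient: Pearson correlation between an alpha's
    values and subsequent returns (a standard alpha-quality metric)."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    r = (fwd_returns - fwd_returns.mean()) / (fwd_returns.std() + 1e-12)
    return float(np.mean(s * r))

def combination_ic(alphas, fwd_returns):
    """IC of a combined signal; an equal-weight mean stands in here for
    the paper's learned combination model."""
    if not alphas:
        return 0.0
    return ic(np.mean(np.stack(alphas), axis=0), fwd_returns)

def rl_return(pool, candidate, fwd_returns):
    """The RL return for generating `candidate`: its marginal contribution
    to the downstream combination model's performance."""
    return (combination_ic(pool + [candidate], fwd_returns)
            - combination_ic(pool, fwd_returns))
```

Under this scheme, a perfectly predictive candidate added to an empty pool earns a return near 1, while a candidate that cancels the pool's existing signal earns a negative return, steering the generator toward alphas that complement the current set rather than duplicating it.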
Related papers
- AlphaEvolve: A coding agent for scientific and algorithmic discovery [63.13852052551106]
We present AlphaEvolve, an evolutionary coding agent that substantially enhances the capabilities of state-of-the-art LLMs. AlphaEvolve orchestrates an autonomous pipeline of LLMs whose task is to improve an algorithm by making direct changes to the code. We demonstrate the broad applicability of this approach by applying it to a number of important computational problems.
arXiv Detail & Related papers (2025-06-16T06:37:18Z) - AlphaAgent: LLM-Driven Alpha Mining with Regularized Exploration to Counteract Alpha Decay [43.50447460231601]
We propose AlphaAgent, an autonomous framework that integrates Large Language Models with ad hoc regularizations for mining decay-resistant alpha factors.
AlphaAgent consistently delivers significant alpha in Chinese CSI 500 and US S&P 500 markets over the past four years.
Notably, AlphaAgent showcases remarkable resistance to alpha decay, elevating the potential for yielding powerful factors.
arXiv Detail & Related papers (2025-02-24T02:56:46Z) - MM-RLHF: The Next Step Forward in Multimodal LLM Alignment [59.536850459059856]
We introduce MM-RLHF, a dataset containing 120k fine-grained, human-annotated preference comparison pairs.
We propose several key innovations to improve the quality of reward models and the efficiency of alignment algorithms.
Our approach is rigorously evaluated across 10 distinct dimensions and 27 benchmarks.
arXiv Detail & Related papers (2025-02-14T18:59:51Z) - Improving AlphaFlow for Efficient Protein Ensembles Generation [64.10918970280603]
We propose a feature-conditioned generative model called AlphaFlow-Lit to realize efficient protein ensembles generation.
AlphaFlow-Lit performs on par with AlphaFlow and surpasses its distilled version without pretraining, all while achieving a significant sampling acceleration of around 47 times.
arXiv Detail & Related papers (2024-07-08T13:36:43Z) - AlphaForge: A Framework to Mine and Dynamically Combine Formulaic Alpha Factors [14.80394452270726]
This paper proposes AlphaForge, a two-stage framework for alpha factor mining and factor combination.
Experiments conducted on real-world datasets demonstrate that our proposed model outperforms contemporary benchmarks in formulaic alpha factor mining.
arXiv Detail & Related papers (2024-06-26T14:34:37Z) - $\text{Alpha}^2$: Discovering Logical Formulaic Alphas using Deep Reinforcement Learning [28.491587815128575]
We propose a novel framework for alpha discovery using deep reinforcement learning (DRL).
A search algorithm guided by DRL navigates through the search space based on value estimates for potential alpha outcomes.
Empirical experiments on real-world stock markets demonstrate $\text{Alpha}^2$'s capability to identify a diverse set of logical and effective alphas.
arXiv Detail & Related papers (2024-06-24T10:21:29Z) - Weak-to-Strong Extrapolation Expedites Alignment [135.12769233630362]
We propose a method called ExPO to boost models' alignment with human preference.
We demonstrate that ExPO consistently improves off-the-shelf DPO/RLHF models.
We shed light on the essence of ExPO: amplifying the reward signal learned during alignment training.
arXiv Detail & Related papers (2024-04-25T17:39:50Z) - Synergistic Formulaic Alpha Generation for Quantitative Trading based on Reinforcement Learning [1.3194391758295114]
This paper proposes a method to enhance existing alpha factor mining approaches by expanding a search space.
We employ information coefficient (IC) and rank information coefficient (Rank IC) as performance evaluation metrics for the model.
arXiv Detail & Related papers (2024-01-05T08:49:13Z) - DiffusionMat: Alpha Matting as Sequential Refinement Learning [87.76572845943929]
DiffusionMat is an image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes.
A correction module adjusts the output at each denoising step, ensuring that the final result is consistent with the input image's structures.
We evaluate our model across several image matting benchmarks, and the results indicate that DiffusionMat consistently outperforms existing methods.
arXiv Detail & Related papers (2023-11-22T17:16:44Z) - Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment [9.424699345940725]
We propose a new alpha mining paradigm by introducing human-AI interaction.
We also develop Alpha-GPT, a new interactive alpha mining system framework.
arXiv Detail & Related papers (2023-07-31T16:40:06Z) - AlphaEvolve: A Learning Framework to Discover Novel Alphas in
Quantitative Investment [16.27557073668891]
We introduce a new class of alphas to model scalar, vector, and matrix features.
The new alphas predict returns with high accuracy and can be mined into a weakly correlated set.
We propose a novel alpha mining framework based on AutoML, called AlphaEvolve, to generate the new alphas.
arXiv Detail & Related papers (2021-03-30T09:28:41Z) - Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box
Estimation [85.22775182688798]
This work proposes a novel, flexible, and accurate refinement module called Alpha-Refine.
It can significantly improve the base trackers' box estimation quality.
Experiments on TrackingNet, LaSOT, GOT-10K, and VOT 2020 benchmarks show that our approach significantly improves the base trackers' performance with little extra latency.
arXiv Detail & Related papers (2020-12-12T13:33:25Z) - Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box
Estimation [87.53808756910452]
We propose a novel, flexible and accurate refinement module called Alpha-Refine.
It exploits a precise pixel-wise correlation layer together with a spatial-aware non-local layer to fuse features and can predict three complementary outputs: bounding box, corners and mask.
We apply the proposed Alpha-Refine module to five famous and state-of-the-art base trackers: DiMP, ATOM, SiamRPN++, RTMDNet and ECO.
arXiv Detail & Related papers (2020-07-04T07:02:25Z) - Reinforcement Learning with Augmented Data [97.42819506719191]
We present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.
We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T17:35:32Z)
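As a concrete example of the kind of plug-and-play augmentation the RAD entry above refers to, here is a minimal random crop for image observations (an illustrative sketch, not the RAD implementation; the function name and signature are assumptions):

```python
import numpy as np

def random_crop(obs, out_h, out_w, rng):
    """Crop a random (out_h, out_w) window from an (H, W, C) observation,
    one of the augmentations RAD shows can boost simple RL algorithms."""
    h, w = obs.shape[:2]
    top = int(rng.integers(0, h - out_h + 1))
    left = int(rng.integers(0, w - out_w + 1))
    return obs[top:top + out_h, left:left + out_w]
```

In RAD, such augmentations are applied to observation batches before they reach the RL algorithm, leaving the algorithm itself unchanged.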
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.