$\text{Alpha}^2$: Discovering Logical Formulaic Alphas using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2406.16505v2
- Date: Wed, 26 Jun 2024 07:40:12 GMT
- Title: $\text{Alpha}^2$: Discovering Logical Formulaic Alphas using Deep Reinforcement Learning
- Authors: Feng Xu, Yan Yin, Xinyu Zhang, Tianyuan Liu, Shengyi Jiang, Zongzhang Zhang,
- Abstract summary: We propose a novel framework for alpha discovery using deep reinforcement learning (DRL).
A search algorithm guided by DRL navigates through the search space based on value estimates for potential alpha outcomes.
Empirical experiments on real-world stock markets demonstrate $\text{Alpha}^2$'s capability to identify a diverse set of logical and effective alphas.
- Score: 28.491587815128575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Alphas are pivotal in providing signals for quantitative trading. The industry highly values the discovery of formulaic alphas for their interpretability and ease of analysis, compared with the expressive yet overfitting-prone black-box alphas. In this work, we focus on discovering formulaic alphas. Prior studies on automatically generating a collection of formulaic alphas were mostly based on genetic programming (GP), which is known to suffer from the problems of being sensitive to the initial population, converging to local optima, and slow computation speed. Recent efforts employing deep reinforcement learning (DRL) for alpha discovery have not fully addressed key practical considerations such as alpha correlations and validity, which are crucial for their effectiveness. In this work, we propose a novel framework for alpha discovery using DRL by formulating the alpha discovery process as program construction. Our agent, $\text{Alpha}^2$, assembles an alpha program optimized for an evaluation metric. A search algorithm guided by DRL navigates through the search space based on value estimates for potential alpha outcomes. The evaluation metric encourages both the performance and the diversity of alphas for a better final trading strategy. Our formulation of searching alphas also brings the advantage of pre-calculation dimensional analysis, ensuring the logical soundness of alphas and pruning the vast search space to a large extent. Empirical experiments on real-world stock markets demonstrate $\text{Alpha}^2$'s capability to identify a diverse set of logical and effective alphas, which significantly improves the performance of the final trading strategy. The code of our method is available at https://github.com/x35f/alpha2.
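The abstract does not spell out the evaluation metric, but the standard ingredients in formulaic-alpha work are the information coefficient (IC) for predictive power and a correlation penalty against alphas already in the pool for diversity. The sketch below illustrates that combination; the function names, the linear penalty, and its weight are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Minimal sketch (not the authors' implementation): a score that rewards
# predictive power (IC / Rank IC) and penalizes overlap with alphas already
# in the pool, mirroring the performance-plus-diversity idea in the abstract.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def information_coefficient(alpha_values, forward_returns):
    """Pearson IC between an alpha's values and realized forward returns."""
    return pearsonr(alpha_values, forward_returns)[0]

def rank_information_coefficient(alpha_values, forward_returns):
    """Spearman (rank) IC, more robust to outliers than the Pearson IC."""
    return spearmanr(alpha_values, forward_returns)[0]

def diversity_aware_score(alpha_values, forward_returns, alpha_pool, penalty=0.5):
    """IC minus a penalty on the highest absolute correlation with any alpha
    already in the pool, so redundant candidates score poorly."""
    ic = information_coefficient(alpha_values, forward_returns)
    if not alpha_pool:
        return ic
    max_corr = max(abs(pearsonr(alpha_values, existing)[0]) for existing in alpha_pool)
    return ic - penalty * max_corr

# Toy usage with random data; real alphas are evaluated cross-sectionally per day.
rng = np.random.default_rng(0)
returns = rng.normal(size=500)
candidate = returns + rng.normal(scale=2.0, size=500)   # weak, noisy signal
pool = [rng.normal(size=500)]
print(information_coefficient(candidate, returns))
print(rank_information_coefficient(candidate, returns))
print(diversity_aware_score(candidate, returns, pool))
```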
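The abstract also credits pre-calculation dimensional analysis with guaranteeing logical soundness and pruning much of the search space before any market data is touched. The following is a minimal sketch of that idea under assumed operator and unit definitions; the `Node` class, unit tags, and operator set are hypothetical, chosen only to illustrate the check, not taken from the paper.

```python
# Minimal sketch (assumed rules, not the paper's actual operator/unit system):
# tag every node of a candidate alpha program with a unit and refuse to build
# operations whose operand units are incompatible, so expressions such as
# close + volume are pruned before they are ever evaluated on data.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    expr: str
    unit: str  # e.g. "price", "volume", "ratio"

def combine(op: str, a: Node, b: Node) -> Optional[Node]:
    """Return the combined node, or None if the combination is dimensionally invalid."""
    if op in ("add", "sub"):
        if a.unit != b.unit:                      # adding price to volume is meaningless
            return None
        return Node(f"({a.expr} {op} {b.expr})", a.unit)
    if op == "div":
        unit = "ratio" if a.unit == b.unit else f"{a.unit}/{b.unit}"
        return Node(f"({a.expr} / {b.expr})", unit)
    return None                                   # operator not covered by this sketch

close, volume, open_ = Node("close", "price"), Node("volume", "volume"), Node("open", "price")
print(combine("add", close, volume))   # None -> candidate pruned from the search
print(combine("div", close, open_))    # Node(expr='(close / open)', unit='ratio')
```

Because the check runs on expression structure alone, invalid programs can be rejected during construction, which is what allows a large fraction of the search space to be pruned cheaply.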
Related papers
- AlphaEvolve: A coding agent for scientific and algorithmic discovery [63.13852052551106]
We present AlphaEvolve, an evolutionary coding agent that substantially enhances the capabilities of state-of-the-art LLMs.
AlphaEvolve orchestrates an autonomous pipeline of LLMs, whose task is to improve an algorithm by making direct changes to the code.
We demonstrate the broad applicability of this approach by applying it to a number of important computational problems.
arXiv Detail & Related papers (2025-06-16T06:37:18Z) - AlphaAgent: LLM-Driven Alpha Mining with Regularized Exploration to Counteract Alpha Decay [43.50447460231601]
We propose AlphaAgent, an autonomous framework that integrates Large Language Models with ad hoc regularizations for mining decay-resistant alpha factors.
AlphaAgent consistently delivers significant alpha in the Chinese CSI 500 and US S&P 500 markets over the past four years.
Notably, AlphaAgent showcases remarkable resistance to alpha decay, elevating the potential for yielding powerful factors.
arXiv Detail & Related papers (2025-02-24T02:56:46Z) - Alpha Mining and Enhancing via Warm Start Genetic Programming for Quantitative Investment [3.4196842063159076]
Traditional genetic programming (GP) often struggles in stock alpha factor discovery.
We find that GP performs better when focusing on promising regions rather than random searching.
arXiv Detail & Related papers (2024-12-01T17:13:54Z) - RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation [54.707460684650584]
Large Language Models (LLMs) demonstrate human-level capabilities in dialogue, reasoning, and knowledge retention.
Current research addresses the limitations of LLMs by equipping them with external knowledge, a technique known as Retrieval-Augmented Generation (RAG).
RAGLAB is a modular and research-oriented open-source library that reproduces 6 existing algorithms and provides a comprehensive ecosystem for investigating RAG algorithms.
arXiv Detail & Related papers (2024-08-21T07:20:48Z) - Improving AlphaFlow for Efficient Protein Ensembles Generation [64.10918970280603]
We propose a feature-conditioned generative model called AlphaFlow-Lit to realize efficient protein ensembles generation.
AlphaFlow-Lit performs on par with AlphaFlow and surpasses its distilled version without pretraining, all while achieving a significant sampling acceleration of around 47 times.
arXiv Detail & Related papers (2024-07-08T13:36:43Z) - AlphaForge: A Framework to Mine and Dynamically Combine Formulaic Alpha Factors [14.80394452270726]
This paper proposes AlphaForge, a two-stage alpha-generating framework for alpha factor mining and factor combination.
Experiments conducted on real-world datasets demonstrate that our proposed model outperforms contemporary benchmarks in formulaic alpha factor mining.
arXiv Detail & Related papers (2024-06-26T14:34:37Z) - Synergistic Formulaic Alpha Generation for Quantitative Trading based on Reinforcement Learning [1.3194391758295114]
This paper proposes a method to enhance existing alpha factor mining approaches by expanding the search space.
We employ information coefficient (IC) and rank information coefficient (Rank IC) as performance evaluation metrics for the model.
arXiv Detail & Related papers (2024-01-05T08:49:13Z) - Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment [9.424699345940725]
We propose a new alpha mining paradigm by introducing human-AI interaction.
We also develop Alpha-GPT, a new interactive alpha mining system framework.
arXiv Detail & Related papers (2023-07-31T16:40:06Z) - Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning [20.589583396095225]
We propose a new alpha-mining framework that prioritizes mining a synergistic set of alphas.
We show that our framework is able to achieve higher returns compared to previous approaches.
arXiv Detail & Related papers (2023-05-25T13:41:07Z) - AlphaEvolve: A Learning Framework to Discover Novel Alphas in Quantitative Investment [16.27557073668891]
We introduce a new class of alphas to model scalar, vector, and matrix features.
The new alphas predict returns with high accuracy and can be mined into a weakly correlated set.
We propose a novel alpha mining framework based on AutoML, called AlphaEvolve, to generate the new alphas.
arXiv Detail & Related papers (2021-03-30T09:28:41Z) - Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation [85.22775182688798]
This work proposes a novel, flexible, and accurate refinement module called Alpha-Refine.
It can significantly improve the base trackers' box estimation quality.
Experiments on TrackingNet, LaSOT, GOT-10K, and VOT 2020 benchmarks show that our approach significantly improves the base trackers' performance with little extra latency.
arXiv Detail & Related papers (2020-12-12T13:33:25Z) - Discovering Reinforcement Learning Algorithms [53.72358280495428]
Reinforcement learning algorithms update an agent's parameters according to one of several possible rules.
This paper introduces a new meta-learning approach that discovers an entire update rule.
It includes both 'what to predict' (e.g. value functions) and 'how to learn from it' by interacting with a set of environments.
arXiv Detail & Related papers (2020-07-17T07:38:39Z) - SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
arXiv Detail & Related papers (2020-07-09T17:08:44Z) - Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation [87.53808756910452]
We propose a novel, flexible and accurate refinement module called Alpha-Refine.
It exploits a precise pixel-wise correlation layer together with a spatial-aware non-local layer to fuse features and can predict three complementary outputs: bounding box, corners and mask.
We apply the proposed Alpha-Refine module to five famous and state-of-the-art base trackers: DiMP, ATOM, SiamRPN++, RTMDNet and ECO.
arXiv Detail & Related papers (2020-07-04T07:02:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.