Alpha Mining and Enhancing via Warm Start Genetic Programming for Quantitative Investment
- URL: http://arxiv.org/abs/2412.00896v1
- Date: Sun, 01 Dec 2024 17:13:54 GMT
- Title: Alpha Mining and Enhancing via Warm Start Genetic Programming for Quantitative Investment
- Authors: Weizhe Ren, Yichen Qin, Yang Li
- Abstract summary: Traditional genetic programming (GP) often struggles in stock alpha factor discovery. We find that GP performs better when focusing on promising regions rather than random searching.
- Score: 3.4196842063159076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional genetic programming (GP) often struggles in stock alpha factor discovery due to its vast search space, overwhelming computational burden, and the sporadic occurrence of effective alphas. We find that GP performs better when it focuses on promising regions rather than searching randomly. This paper proposes a new GP framework with carefully chosen initialization and structural constraints that enhance search performance and improve the interpretability of the alpha factors. The approach is motivated by, and mimics, practical alpha-search workflows, and aims to boost the efficiency of that process. Analysis of 2020-2024 Chinese stock market data shows that our method yields superior out-of-sample prediction results and higher portfolio returns than the benchmark.
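The abstract's core idea, seeding the genetic search from promising regions instead of purely random expressions, can be illustrated with a toy sketch. This is a generic illustration of warm-start initialization, not the paper's implementation; the factor functions, synthetic data, and sign-hit fitness proxy are all invented for the example:

```python
import random

# Toy "market" data: an upward-drifting price series and a volume series.
random.seed(0)
prices = [100 + i + random.gauss(0, 2) for i in range(60)]
volumes = [1000 + random.gauss(0, 50) for _ in range(60)]

# Candidate alpha factors are small functions of the data at time t.
def momentum(p, v, t, w=5):   # classic momentum template
    return p[t] - p[t - w]

def reversal(p, v, t, w=5):   # short-term reversal template
    return -(p[t] - p[t - w])

def noise(p, v, t, w=5):      # an uninformative random factor
    return random.gauss(0, 1)

def fitness(alpha):
    """Fitness proxy: how often does the factor's sign predict the next-day move?"""
    hits = 0
    for t in range(5, len(prices) - 1):
        signal = alpha(prices, volumes, t)
        ret = prices[t + 1] - prices[t]
        hits += (signal > 0) == (ret > 0)
    return hits / (len(prices) - 6)

# Random initialization: draw the population uniformly from all primitives.
random_pop = [random.choice([momentum, reversal, noise]) for _ in range(10)]

# Warm-start initialization: seed the population with known factor templates,
# so the subsequent evolutionary search begins in a promising region.
warm_pop = [momentum, reversal] * 5

best_random = max(fitness(a) for a in random_pop)
best_warm = max(fitness(a) for a in warm_pop)
print(best_random, best_warm)
```

In a full GP loop, crossover and mutation would then refine these seeded expressions; the sketch only shows how the choice of initial population shifts where the search starts.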
Related papers
- AlphaEvolve: A coding agent for scientific and algorithmic discovery [63.13852052551106]
We present AlphaEvolve, an evolutionary coding agent that substantially enhances the capabilities of state-of-the-art LLMs. AlphaEvolve orchestrates an autonomous pipeline of LLMs whose task is to improve an algorithm by making direct changes to the code. We demonstrate the broad applicability of this approach by applying it to a number of important computational problems.
arXiv Detail & Related papers (2025-06-16T06:37:18Z)
- Delving into RL for Image Generation with CoT: A Study on DPO vs. GRPO [68.44918104224818]
Autoregressive image generation presents unique challenges distinct from Chain-of-Thought (CoT) reasoning. This study provides the first comprehensive investigation of the GRPO and DPO algorithms in autoregressive image generation. Our findings reveal that GRPO and DPO exhibit distinct advantages, and crucially, that reward models possessing stronger intrinsic generalization capabilities potentially enhance the generalization potential of the applied RL algorithms.
arXiv Detail & Related papers (2025-05-22T17:59:49Z)
- Navigating the Alpha Jungle: An LLM-Powered MCTS Framework for Formulaic Factor Mining [8.53606484300001]
This paper introduces a novel framework that integrates Large Language Models (LLMs) with Monte Carlo Tree Search (MCTS). A key innovation is the guidance of MCTS exploration by rich, quantitative feedback from financial backtesting of each candidate factor. Experimental results on real-world stock market data demonstrate that our LLM-based framework outperforms existing methods by mining alphas with superior predictive accuracy and trading performance.
arXiv Detail & Related papers (2025-05-16T11:14:17Z)
- QuantFactor REINFORCE: Mining Steady Formulaic Alpha Factors with Variance-bounded REINFORCE [5.560011325936085]
The goal of alpha factor mining is to discover indicative signals of investment opportunities from the historical financial market data of assets.
Recently, a promising framework was proposed for generating formulaic alpha factors using deep reinforcement learning.
arXiv Detail & Related papers (2024-09-08T15:57:58Z)
- AlphaForge: A Framework to Mine and Dynamically Combine Formulaic Alpha Factors [14.80394452270726]
This paper proposes AlphaForge, a two-stage framework for alpha factor mining and factor combination.
Experiments conducted on real-world datasets demonstrate that our proposed model outperforms contemporary benchmarks in formulaic alpha factor mining.
arXiv Detail & Related papers (2024-06-26T14:34:37Z)
- $\text{Alpha}^2$: Discovering Logical Formulaic Alphas using Deep Reinforcement Learning [28.491587815128575]
We propose a novel framework for alpha discovery using deep reinforcement learning (DRL).
A search algorithm guided by DRL navigates through the search space based on value estimates for potential alpha outcomes.
Empirical experiments on real-world stock markets demonstrate $\text{Alpha}^2$'s capability to identify a diverse set of logical and effective alphas.
arXiv Detail & Related papers (2024-06-24T10:21:29Z)
- Synergistic Formulaic Alpha Generation for Quantitative Trading based on Reinforcement Learning [1.3194391758295114]
This paper proposes a method to enhance existing alpha factor mining approaches by expanding the search space.
We employ information coefficient (IC) and rank information coefficient (Rank IC) as performance evaluation metrics for the model.
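IC and Rank IC are standard factor-evaluation metrics: the information coefficient is the Pearson correlation between a factor's values and subsequent returns, and Rank IC is the Spearman correlation, i.e. Pearson computed on ranks. A minimal sketch with made-up sample data and no tie handling:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    """1-based ranks; average-rank tie handling is omitted in this sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def ic(factor, fwd_returns):
    """Information coefficient: Pearson corr of factor values vs forward returns."""
    return pearson(factor, fwd_returns)

def rank_ic(factor, fwd_returns):
    """Rank IC: Spearman correlation, i.e. Pearson on the ranks."""
    return pearson(ranks(factor), ranks(fwd_returns))

# Illustrative cross-section: factor values and next-period returns for 5 stocks.
factor = [0.1, 0.4, 0.2, 0.9, 0.3]
rets = [0.01, 0.03, 0.00, 0.05, 0.02]
print(round(ic(factor, rets), 3), round(rank_ic(factor, rets), 3))
```

In practice these are computed per cross-section (e.g. daily across all stocks) and averaged over time.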
arXiv Detail & Related papers (2024-01-05T08:49:13Z)
- Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design [54.39859618450935]
We show that it is possible to meta-learn update rules, with the hope of discovering algorithms that can perform well on a wide range of RL tasks.
Despite impressive initial results from algorithms such as Learned Policy Gradient (LPG), there remains a gap when these algorithms are applied to unseen environments.
In this work, we examine how characteristics of the meta-supervised-training distribution impact the performance of these algorithms.
arXiv Detail & Related papers (2023-10-04T12:52:56Z)
- Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning [20.589583396095225]
We propose a new alpha-mining framework that prioritizes mining a synergistic set of alphas.
We show that our framework is able to achieve higher returns compared to previous approaches.
arXiv Detail & Related papers (2023-05-25T13:41:07Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- RANK-NOSH: Efficient Predictor-Based Architecture Search via Non-Uniform Successive Halving [74.61723678821049]
We propose NOn-uniform Successive Halving (NOSH), a hierarchical scheduling algorithm that terminates the training of underperforming architectures early to avoid wasting budget.
We formulate predictor-based architecture search as learning to rank with pairwise comparisons.
The resulting method, RANK-NOSH, reduces the search budget by 5x while achieving competitive or even better performance than previous state-of-the-art predictor-based methods on various search spaces and datasets.
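The successive-halving idea underlying NOSH (evaluate all candidates briefly, drop the worse half, repeat with a larger budget) can be sketched as follows. The uniform budget schedule and the `evaluate(candidate, budget)` interface here are illustrative assumptions, not RANK-NOSH's actual non-uniform scheme:

```python
import random

def successive_halving(candidates, evaluate, budgets=(1, 3, 9)):
    """Evaluate every candidate at a small budget, keep the top half, repeat.

    `evaluate(cand, budget)` returns a validation score after spending
    `budget` training units on `cand`; higher is better. (Hypothetical
    interface invented for this sketch.)
    """
    pool = list(candidates)
    for budget in budgets:
        scored = [(evaluate(c, budget), c) for c in pool]
        scored.sort(key=lambda sc: sc[0], reverse=True)
        # Underperforming candidates are terminated early, saving budget.
        pool = [c for _, c in scored[: max(1, len(pool) // 2)]]
    return pool[0]

# Toy evaluation: candidate i's "true" quality is i; scores are noisier
# at low budget, mimicking early-training performance estimates.
random.seed(1)
def evaluate(cand, budget):
    return cand + random.gauss(0, 2.0 / budget)

best = successive_halving(range(8), evaluate)
print(best)
```

RANK-NOSH's contribution is to make this schedule non-uniform and to drive the ranking with a learned pairwise predictor rather than raw validation scores.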
arXiv Detail & Related papers (2021-08-18T07:45:21Z)
- Policy Gradient Bayesian Robust Optimization for Imitation Learning [49.881386773269746]
We derive a novel policy gradient-style robust optimization approach, PG-BROIL, to balance expected performance and risk.
Results suggest PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse.
arXiv Detail & Related papers (2021-06-11T16:49:15Z)
- Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation [85.22775182688798]
This work proposes a novel, flexible, and accurate refinement module called Alpha-Refine.
It can significantly improve the base trackers' box estimation quality.
Experiments on TrackingNet, LaSOT, GOT-10K, and VOT 2020 benchmarks show that our approach significantly improves the base trackers' performance with little extra latency.
arXiv Detail & Related papers (2020-12-12T13:33:25Z)
- Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO [90.90009491366273]
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms.
Specifically, we investigate the consequences of "code-level optimizations."
Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function.
arXiv Detail & Related papers (2020-05-25T16:24:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.