Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining
- URL: http://arxiv.org/abs/2507.20263v1
- Date: Sun, 27 Jul 2025 13:14:48 GMT
- Title: Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining
- Authors: Junjie Zhao, Chengxi Zhang, Chenkai Wang, Peng Yang
- Abstract summary: Reinforcement learning has successfully automated the complex process of mining formulaic alpha factors for creating interpretable and profitable investment strategies. Existing methods are hampered by sparse rewards in the underlying Markov Decision Process. To address this, Trajectory-level Reward Shaping (TLRS), a novel reward shaping method, is proposed.
- Score: 5.560011325936085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) has successfully automated the complex process of mining formulaic alpha factors for creating interpretable and profitable investment strategies. However, existing methods are hampered by sparse rewards in the underlying Markov Decision Process. This inefficiency limits exploration of the vast symbolic search space and destabilizes the training process. To address this, Trajectory-level Reward Shaping (TLRS), a novel reward shaping method, is proposed. TLRS provides dense, intermediate rewards by measuring the subsequence-level similarity between partially generated expressions and a set of expert-designed formulas. Furthermore, a reward centering mechanism is introduced to reduce training variance. Extensive experiments on six major Chinese and U.S. stock indices show that TLRS significantly improves the predictive power of mined factors, boosting the Rank Information Coefficient by 9.29% over existing potential-based shaping algorithms. Notably, TLRS achieves a major leap in computational efficiency by reducing its time complexity with respect to the feature dimension from linear to constant, a significant improvement over distance-based baselines.
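To make the shaping idea concrete, here is a minimal Python sketch of trajectory-level reward shaping as the abstract describes it: a partially generated expression earns a dense intermediate reward from its subsequence-level similarity to a library of expert formulas, and a running mean centers the reward to reduce variance. The LCS-based similarity, the prefix-notation token vocabulary, and the incremental centering rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of Trajectory-level Reward Shaping (TLRS).
# Assumption: expressions are token sequences in prefix notation, and
# subsequence similarity is approximated with a longest-common-subsequence
# (LCS) ratio; the paper's exact similarity and centering may differ.

def subseq_similarity(partial: list[str], expert: list[str]) -> float:
    """LCS length divided by the partial expression's length."""
    m, n = len(partial), len(expert)
    if m == 0:
        return 0.0
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # classic O(m*n) LCS DP
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if partial[i - 1] == expert[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / m


class CenteredShaper:
    """Dense intermediate reward: best similarity to any expert formula,
    minus a running mean (reward centering) to reduce training variance."""

    def __init__(self, expert_formulas: list[list[str]]):
        self.experts = expert_formulas
        self.mean = 0.0
        self.count = 0

    def reward(self, partial_expr: list[str]) -> float:
        raw = max(subseq_similarity(partial_expr, e) for e in self.experts)
        centered = raw - self.mean
        self.count += 1
        self.mean += (raw - self.mean) / self.count  # incremental mean update
        return centered


# Usage: score a half-built expression against one hypothetical expert
# factor, e.g. close / MA(close, 20) written in prefix notation.
experts = [["div", "close", "ma", "close", "20"]]
shaper = CenteredShaper(experts)
print(shaper.reward(["div", "close"]))  # dense feedback mid-trajectory
```

A potential-based scheme would instead reward the change in similarity between successive steps; the centered form above is just one way to keep the shaping signal near zero mean.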
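The Rank Information Coefficient quoted in the results is a standard factor-quality metric: the cross-sectional Spearman rank correlation between factor values and next-period returns, averaged over the evaluation window. A short sketch, assuming that standard definition (the random data is a placeholder):

```python
import numpy as np
from scipy.stats import spearmanr

def rank_ic(factor: np.ndarray, fwd_returns: np.ndarray) -> float:
    """Cross-sectional Rank IC for one date: Spearman rank correlation
    between factor values and next-period stock returns."""
    corr, _ = spearmanr(factor, fwd_returns)
    return float(corr)

# Daily Rank ICs are typically averaged over the evaluation window.
rng = np.random.default_rng(0)
daily_ics = [rank_ic(rng.normal(size=300), rng.normal(size=300))
             for _ in range(250)]
print(np.mean(daily_ics))  # mean Rank IC over ~one trading year
```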
Related papers
- Good Learners Think Their Thinking: Generative PRM Makes Large Reasoning Model More Efficient Math Learner [31.033131727230277]
Large reasoning models (LRMs) have recently shown promise in solving complex math problems when optimized with Reinforcement Learning (RL). We propose a novel intrinsic signal-driven generative process evaluation mechanism operating at the thought level to address major bottlenecks in RL-based training. Experiments on 1.5B and 7B parameter LRMs demonstrate that our method achieves higher problem-solving accuracy with significantly fewer training samples than outcome-only reward baselines.
arXiv Detail & Related papers (2025-07-31T07:54:58Z)
- Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training [121.5858973157225]
We investigate the effects of prolonged reinforcement learning on a small language model across a diverse set of reasoning domains. We introduce controlled KL regularization, clipping ratio, and periodic reference policy resets as critical components for unlocking long-term performance gains. Our model achieves significant improvements over strong baselines, including +14.7% on math, +13.9% on coding, and +54.8% on logic puzzle tasks.
arXiv Detail & Related papers (2025-07-16T17:59:24Z)
- Outcome-Based Online Reinforcement Learning: Algorithms and Fundamental Limits [58.63897489864948]
Reinforcement learning with outcome-based feedback faces a fundamental challenge: how do we assign credit to the right actions? This paper provides the first comprehensive analysis of this problem in online RL with general function approximation.
arXiv Detail & Related papers (2025-05-26T17:44:08Z)
- Reinforced Latent Reasoning for LLM-based Recommendation [83.18146814163308]
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities in complex problem-solving tasks. Existing methods typically rely on fine-tuning with explicit chain-of-thought (CoT) data. In this work, we explore an alternative approach that shifts from explicit CoT reasoning to compact, information-dense latent reasoning.
arXiv Detail & Related papers (2025-05-25T11:03:45Z)
- Navigate the Unknown: Enhancing LLM Reasoning with Intrinsic Motivation Guided Exploration [33.807927649100805]
Reinforcement learning (RL) has emerged as a pivotal method for improving the reasoning capabilities of Large Language Models (LLMs). RL approaches face critical limitations due to their reliance on sparse outcome-based rewards and inadequate mechanisms for incentivizing exploration. We propose an Intrinsic Motivation guidEd exploratioN meThOd foR LLM Reasoning (i-MENTOR). i-MENTOR introduces three key innovations: trajectory-aware exploration rewards that mitigate bias in token-level strategies; dynamic reward scaling to stabilize exploration and exploitation in large action spaces; and advantage-preserving reward implementation that maintains…
arXiv Detail & Related papers (2025-05-23T08:30:28Z)
- Navigating the Alpha Jungle: An LLM-Powered MCTS Framework for Formulaic Factor Mining [8.53606484300001]
This paper introduces a novel framework that integrates Large Language Models (LLMs) with Monte Carlo Tree Search (MCTS). A key innovation is the guidance of MCTS exploration by rich, quantitative feedback from financial backtesting of each candidate factor. Experimental results on real-world stock market data demonstrate that our LLM-based framework outperforms existing methods by mining alphas with superior predictive accuracy and trading performance.
arXiv Detail & Related papers (2025-05-16T11:14:17Z)
- Rethinking LLM Advancement: Compute-Dependent and Independent Paths to Progress [10.461430685627857]
This study evaluates whether large language models can advance through algorithmic innovation in compute-constrained environments. We propose a novel framework distinguishing compute-dependent innovations, which yield disproportionate benefits at high compute, from compute-independent innovations.
arXiv Detail & Related papers (2025-05-07T02:26:17Z)
- RL-PINNs: Reinforcement Learning-Driven Adaptive Sampling for Efficient Training of PINNs [0.0]
Physics-Informed Neural Networks (PINNs) have emerged as a powerful framework for solving partial differential equations (PDEs). Their performance heavily relies on the strategy used to select training points. We propose RL-PINNs, a reinforcement learning-driven adaptive sampling framework that enables efficient training with only a single round of sampling.
arXiv Detail & Related papers (2025-04-17T13:50:55Z)
- VinePPO: Refining Credit Assignment in RL Training of LLMs [66.80143024475635]
We propose VinePPO, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based estimates. Our method consistently outperforms PPO and other baselines across MATH and GSM8K datasets in less wall-clock time.
arXiv Detail & Related papers (2024-10-02T15:49:30Z)
- QuantFactor REINFORCE: Mining Steady Formulaic Alpha Factors with Variance-bounded REINFORCE [5.560011325936085]
Powerful deep learning methods for alpha factor mining lack interpretability, making them unacceptable in risk-sensitive real markets. Formulaic alpha factors are preferred for their interpretability, but the search space is complex and powerful explorative methods are needed. Recently, a promising framework was proposed for generating alpha factors using deep reinforcement learning.
arXiv Detail & Related papers (2024-09-08T15:57:58Z)
- Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning. We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging. We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
arXiv Detail & Related papers (2024-08-18T14:25:44Z)
- FactorLLM: Factorizing Knowledge via Mixture of Experts for Large Language Models [50.331708897857574]
We introduce FactorLLM, a novel approach that decomposes well-trained dense FFNs into sparse sub-networks without requiring any further modifications.
FactorLLM achieves performance comparable to the source model, retaining up to 85% of its performance while obtaining over a 30% increase in inference speed.
arXiv Detail & Related papers (2024-08-15T16:45:16Z)
- Reinforcement Learning from Diverse Human Preferences [68.4294547285359]
This paper develops a method for crowd-sourcing preference labels and learning from diverse human preferences.
The proposed method is tested on a variety of tasks in DMControl and Meta-world.
It has shown consistent and significant improvements over existing preference-based RL algorithms when learning from diverse feedback.
arXiv Detail & Related papers (2023-01-27T15:18:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.