AlphaEvolve: A Learning Framework to Discover Novel Alphas in Quantitative Investment
 - URL: http://arxiv.org/abs/2103.16196v2
 - Date: Thu, 1 Apr 2021 10:35:19 GMT
 - Authors: Can Cui, Wei Wang, Meihui Zhang, Gang Chen, Zhaojing Luo, Beng Chin Ooi
 - Abstract summary: We introduce a new class of alphas to model scalar, vector, and matrix features.
The new alphas predict returns with high accuracy and can be mined into a weakly correlated set.
We propose a novel alpha mining framework based on AutoML, called AlphaEvolve, to generate the new alphas.
 - Score: 16.27557073668891
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Alphas are stock prediction models capturing trading signals in a stock
market. A set of effective alphas can generate weakly correlated, high returns
that diversify risk. Existing alphas can be categorized into two classes:
Formulaic alphas are simple algebraic expressions of scalar features, and thus
can generalize well and be mined into a weakly correlated set. Machine learning
alphas are data-driven models over vector and matrix features. They are more
predictive than formulaic alphas, but are too complex to mine into a weakly
correlated set. In this paper, we introduce a new class of alphas that models
scalar, vector, and matrix features and combines the strengths of both
existing classes. The new alphas predict returns with high accuracy and can be
mined into a weakly correlated set. In addition, we propose a novel alpha
mining framework based on AutoML, called AlphaEvolve, to generate the new
alphas. To this end, we first propose operators for generating the new alphas
and selectively injecting relational domain knowledge to model the relations
between stocks. We then accelerate the alpha mining by proposing a pruning
technique for redundant alphas. Experiments show that AlphaEvolve can evolve
initial alphas into the new alphas with high returns and weak correlations.
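To make the distinction above concrete, a formulaic alpha is just an algebraic expression over scalar per-stock features. The sketch below is a hypothetical illustration, not an alpha from the paper: the feature (closing price), the window length, and all values are made up.

```python
import numpy as np

def momentum_alpha(close: np.ndarray, window: int = 5) -> np.ndarray:
    """A toy formulaic alpha: an algebraic expression of a scalar feature.

    Returns the trailing `window`-day return for each stock, a classic
    momentum-style signal. `close` has shape (days, stocks).
    """
    alpha = np.full_like(close, np.nan, dtype=float)
    # Rows before `window` have no lookback history, so they stay NaN.
    alpha[window:] = close[window:] / close[:-window] - 1.0
    return alpha

# Synthetic price paths for 4 stocks over 30 days (illustrative only)
rng = np.random.default_rng(0)
prices = np.cumprod(1 + 0.01 * rng.standard_normal((30, 4)), axis=0)
signal = momentum_alpha(prices)
print(signal.shape)  # (30, 4); the first 5 rows are NaN
```

Mining a "weakly correlated set" then amounts to keeping only alphas whose signals have low pairwise correlation with those already selected; the paper's framework searches over such expressions (and their vector/matrix generalizations) automatically.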
 
       
      
        Related papers
        - AlphaEvolve: A coding agent for scientific and algorithmic discovery [63.13852052551106]
We present AlphaEvolve, an evolutionary coding agent that substantially enhances the capabilities of state-of-the-art LLMs.
AlphaEvolve orchestrates an autonomous pipeline of LLMs whose task is to improve an algorithm by making direct changes to the code.
We demonstrate the broad applicability of this approach by applying it to a number of important computational problems.
arXiv  Detail & Related papers  (2025-06-16T06:37:18Z)
- AlphaAgent: LLM-Driven Alpha Mining with Regularized Exploration to Counteract Alpha Decay [43.50447460231601]
We propose AlphaAgent, an autonomous framework that integrates Large Language Models with ad hoc regularizations for mining decay-resistant alpha factors.
AlphaAgent consistently delivers significant alpha in Chinese CSI 500 and US S&P 500 markets over the past four years.
Notably, AlphaAgent shows remarkable resistance to alpha decay, raising its potential to yield durable factors.
arXiv  Detail & Related papers  (2025-02-24T02:56:46Z)
- Improving AlphaFlow for Efficient Protein Ensembles Generation [64.10918970280603]
We propose a feature-conditioned generative model called AlphaFlow-Lit to realize efficient protein ensembles generation.
AlphaFlow-Lit performs on par with AlphaFlow and surpasses its distilled version without pretraining, all while achieving a sampling acceleration of around 47 times.
arXiv  Detail & Related papers  (2024-07-08T13:36:43Z)
- AlphaForge: A Framework to Mine and Dynamically Combine Formulaic Alpha Factors [14.80394452270726]
This paper proposes a two-stage alpha generating framework AlphaForge, for alpha factor mining and factor combination.
Experiments conducted on real-world datasets demonstrate that our proposed model outperforms contemporary benchmarks in formulaic alpha factor mining.
arXiv  Detail & Related papers  (2024-06-26T14:34:37Z)
- $\text{Alpha}^2$: Discovering Logical Formulaic Alphas using Deep Reinforcement Learning [28.491587815128575]
We propose a novel framework for alpha discovery using deep reinforcement learning (DRL).
A search algorithm guided by DRL navigates the search space based on value estimates for potential alpha outcomes.
Empirical experiments on real-world stock markets demonstrate $\text{Alpha}^2$'s capability to identify a diverse set of logical and effective alphas.
arXiv  Detail & Related papers  (2024-06-24T10:21:29Z)
- Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment [9.424699345940725]
We propose a new alpha mining paradigm by introducing human-AI interaction.
We also develop Alpha-GPT, a new interactive alpha mining system framework.
arXiv  Detail & Related papers  (2023-07-31T16:40:06Z)
- Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning [20.589583396095225]
We propose a new alpha-mining framework that prioritizes mining a synergistic set of alphas.
We show that our framework is able to achieve higher returns compared to previous approaches.
arXiv  Detail & Related papers  (2023-05-25T13:41:07Z)
- Beating the Best: Improving on AlphaFold2 at Protein Structure Prediction [1.3124513975412255]
ARStack significantly outperforms AlphaFold2 and RoseTTAFold.
We rigorously demonstrate this using two sets of non-homologous proteins and a test set of protein structures published after those of AlphaFold2 and RoseTTAFold.
arXiv  Detail & Related papers  (2023-01-18T14:39:34Z)
- I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation [89.38161262164586]
We study generative models of commonsense knowledge, focusing on the task of generating generics.
We introduce I2D2, a novel commonsense distillation framework that loosely follows the Symbolic Knowledge Distillation of West et al.
Our study leads to a new corpus of generics, Gen-A-tomic, that is the largest and highest quality available to date.
arXiv  Detail & Related papers  (2022-12-19T04:47:49Z)
- What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We present preliminary evidence that in-context learners share algorithmic features with these predictors.
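The reference predictors named in this summary have simple closed forms. As a hypothetical illustration (the data, dimensions, and regularization strength below are made up, not from the paper), ridge regression and exact least squares can be compared directly:

```python
import numpy as np

# Synthetic linear-regression problem (illustrative values only)
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(20)

# Ridge regression closed form: w = (X^T X + lambda * I)^{-1} X^T y
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Exact least squares is the lambda -> 0 limit of ridge
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# With a small lambda the two solutions nearly coincide
print(np.allclose(w_ridge, w_ls, atol=0.1))  # True
```

The paper's claim is that a trained in-context learner's predictions on such problems closely track the outputs of predictors like these.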
arXiv  Detail & Related papers  (2022-11-28T18:59:51Z)
- Bridging Maximum Likelihood and Adversarial Learning via $\alpha$-Divergence [78.26304241440113]
We propose an $\alpha$-Bridge to unify the advantages of maximum likelihood (ML) and adversarial learning.
We reveal that generalizations of the $\alpha$-Bridge are closely related to approaches recently developed to regularize adversarial learning.
arXiv  Detail & Related papers  (2020-07-13T04:06:43Z)
- Reinforcement Learning with Augmented Data [97.42819506719191]
We present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.
We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods.
arXiv  Detail & Related papers  (2020-04-30T17:35:32Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv  Detail & Related papers  (2020-03-06T19:00:04Z) 
        This list is automatically generated from the titles and abstracts of the papers in this site.
       
     