Automate Strategy Finding with LLM in Quant Investment
- URL: http://arxiv.org/abs/2409.06289v4
- Date: Mon, 03 Nov 2025 15:46:20 GMT
- Title: Automate Strategy Finding with LLM in Quant Investment
- Authors: Zhizhuo Kou, Holam Yu, Junyu Luo, Jingshu Peng, Xujia Li, Chengzhong Liu, Juntao Dai, Lei Chen, Sirui Han, Yike Guo,
- Abstract summary: We present a novel three-stage framework leveraging Large Language Models (LLMs) within a risk-aware multi-agent system for automated strategy finding in quantitative finance. Our approach addresses the brittleness of traditional deep learning models in financial applications by employing prompt-engineered LLMs to generate executable alpha factor candidates across diverse financial data. Experimental results demonstrate the robust performance of the strategy across Chinese and US market regimes compared to established benchmarks.
- Score: 32.74265532529821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel three-stage framework leveraging Large Language Models (LLMs) within a risk-aware multi-agent system for automated strategy finding in quantitative finance. Our approach addresses the brittleness of traditional deep learning models in financial applications by: employing prompt-engineered LLMs to generate executable alpha factor candidates across diverse financial data; implementing multimodal agent-based evaluation that filters factors based on market status and predictive quality while maintaining category balance; and deploying dynamic weight optimization that adapts to market conditions. Experimental results demonstrate the robust performance of the strategy across Chinese and US market regimes compared to established benchmarks. Our work extends LLM capabilities to quantitative trading, providing a scalable architecture for financial signal extraction and portfolio construction. The overall framework significantly outperforms all benchmarks with a 53.17% cumulative return on the SSE50 (Jan 2023 to Jan 2024), demonstrating superior risk-adjusted performance and downside protection.
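The second stage described in the abstract, agent-based evaluation that filters factors by predictive quality while maintaining category balance, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the factor names, categories, and the use of the information coefficient (IC) as the quality score are assumptions for the example.

```python
# Hypothetical sketch: select alpha factors by predictive quality (IC)
# while preserving category balance, as the abstract's second stage describes.
from collections import defaultdict

def select_factors(candidates, per_category=2):
    """candidates: list of dicts with 'name', 'category', and 'ic'
    (information coefficient against forward returns)."""
    by_category = defaultdict(list)
    for factor in candidates:
        by_category[factor["category"]].append(factor)
    selected = []
    for factors in by_category.values():
        # keep only the top factors per category to preserve diversity
        factors.sort(key=lambda f: f["ic"], reverse=True)
        selected.extend(factors[:per_category])
    # rank the balanced pool by predictive quality
    return sorted(selected, key=lambda f: f["ic"], reverse=True)

candidates = [
    {"name": "mom_20d", "category": "momentum", "ic": 0.05},
    {"name": "mom_5d", "category": "momentum", "ic": 0.03},
    {"name": "mom_60d", "category": "momentum", "ic": 0.02},
    {"name": "pe_inv", "category": "value", "ic": 0.04},
    {"name": "turnover", "category": "liquidity", "ic": 0.01},
]
picked = select_factors(candidates, per_category=2)
print([f["name"] for f in picked])  # → ['mom_20d', 'pe_inv', 'mom_5d', 'turnover']
```

Note that the cap of two factors per category drops the weakest momentum factor even though its IC exceeds the liquidity factor's, which is the category-balance trade-off the abstract alludes to.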
Related papers
- Generative AI-enhanced Sector-based Investment Portfolio Construction [12.174346896225153]
This paper investigates how Large Language Models (LLMs) can be applied to quantitative sector-based portfolio construction. We use LLMs to identify investable universes of stocks within S&P 500 sector indices. We evaluate how their selections perform when combined with classical portfolio optimization methods.
arXiv Detail & Related papers (2025-12-31T00:19:41Z) - LAET: A Layer-wise Adaptive Ensemble Tuning Framework for Pretrained Language Models [7.216206616406649]
Large language models (LLMs) like BloombergGPT and FinMA have set new benchmarks across various financial NLP tasks. We propose Layer-wise Adaptive Ensemble Tuning (LAET), a novel strategy that selectively fine-tunes the most effective layers of pre-trained LLMs. Our approach shows strong results in financial NLP tasks, outperforming existing benchmarks and state-of-the-art LLMs.
arXiv Detail & Related papers (2025-11-14T13:57:46Z) - Hierarchical AI Multi-Agent Fundamental Investing: Evidence from China's A-Share Market [8.11097322686573]
We present a multi-agent, AI-driven framework for fundamental investing that integrates macro indicators, industry-level and firm-specific information to construct optimized equity portfolios. We evaluate the system on the constituents of the CSI 300 Index of China's A-share market and find that it consistently outperforms standard benchmarks.
arXiv Detail & Related papers (2025-10-24T04:38:37Z) - Trade in Minutes! Rationality-Driven Agentic System for Quantitative Financial Trading [57.28635022507172]
TiMi is a rationality-driven multi-agent system that architecturally decouples strategy development from minute-level deployment. We propose a two-tier analytical paradigm from macro patterns to micro customization, layered programming design for trading bot implementation, and closed-loop optimization driven by mathematical reflection.
arXiv Detail & Related papers (2025-10-06T13:08:55Z) - StockBench: Can LLM Agents Trade Stocks Profitably In Real-world Markets? [44.10622904101254]
Large language models (LLMs) have recently demonstrated strong capabilities as autonomous agents. We introduce StockBench, a benchmark designed to evaluate LLM agents in realistic, multi-month stock trading environments. Our evaluation shows that while most LLM agents struggle to outperform the simple buy-and-hold baseline, several models demonstrate the potential to deliver higher returns and manage risk more effectively.
arXiv Detail & Related papers (2025-10-02T16:54:57Z) - FinMarBa: A Market-Informed Dataset for Financial Sentiment Classification [0.0]
This paper presents a novel hierarchical framework for portfolio optimization, integrating lightweight Large Language Models (LLMs) with Deep Reinforcement Learning (DRL). Our three-tier architecture employs base RL agents to process hybrid data, meta-agents to aggregate their decisions, and a super-agent to merge decisions based on market data and sentiment analysis. The framework achieves a 26% annualized return and a Sharpe ratio of 1.2, outperforming equal-weighted and S&P 500 benchmarks.
arXiv Detail & Related papers (2025-07-24T16:27:32Z) - Building crypto portfolios with agentic AI [46.348283638884425]
The rapid growth of crypto markets has opened new opportunities for investors, but at the same time exposed them to high volatility. This paper presents a practical application of a multi-agent system designed to autonomously construct and evaluate crypto-asset allocations.
arXiv Detail & Related papers (2025-07-11T18:03:51Z) - Can LLM-based Financial Investing Strategies Outperform the Market in Long Run? [5.968528974532717]
Large Language Models (LLMs) have been leveraged for asset pricing tasks and stock trading applications, enabling AI agents to generate investment decisions from unstructured financial data. We critically assess their generalizability and robustness by proposing FINSABER, a backtesting framework evaluating timing-based strategies across longer periods and a larger universe of symbols.
arXiv Detail & Related papers (2025-05-11T18:02:21Z) - Cross-Asset Risk Management: Integrating LLMs for Real-Time Monitoring of Equity, Fixed Income, and Currency Markets [30.815524322885754]
Large language models (LLMs) have emerged as powerful tools in the field of finance. We introduce a Cross-Asset Risk Management framework that utilizes LLMs to facilitate real-time monitoring of equity, fixed income, and currency markets.
arXiv Detail & Related papers (2025-04-05T22:28:35Z) - Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute [54.22256089592864]
This paper presents a simple, effective, and cost-efficient strategy to improve LLM performance by scaling test-time compute. Our strategy builds upon the repeated-sampling-then-voting framework, with a novel twist: incorporating multiple models, even weaker ones, to leverage their complementary strengths.
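The repeated-sampling-then-voting idea summarized above can be sketched in a few lines. The model stubs, sampling counts, and success probabilities below are illustrative stand-ins for real LLM calls, not the paper's setup:

```python
# Minimal sketch of repeated-sampling-then-voting across multiple models:
# each model is sampled several times and the final answer is the plurality
# vote over all samples, so even weaker models can contribute useful votes.
from collections import Counter
import random

def sample_and_vote(models, question, samples_per_model=3, rng=None):
    rng = rng or random.Random(0)
    votes = []
    for model in models:
        for _ in range(samples_per_model):
            votes.append(model(question, rng))
    # plurality vote over the pooled samples
    answer, _count = Counter(votes).most_common(1)[0]
    return answer

# Stub "models": a strong one that answers correctly 90% of the time and a
# weak one that is right only half the time; pooled votes still converge.
def strong_model(question, rng):
    return "42" if rng.random() < 0.9 else "41"

def weak_model(question, rng):
    return "42" if rng.random() < 0.5 else "40"

print(sample_and_vote([strong_model, weak_model], "What is 6*7?"))
```

The fixed seed makes the sketch deterministic; in practice the votes would come from temperature-based sampling of actual LLMs.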
arXiv Detail & Related papers (2025-04-01T13:13:43Z) - Collab: Controlled Decoding using Mixture of Agents for LLM Alignment [90.6117569025754]
Reinforcement learning from human feedback has emerged as an effective technique to align Large Language models.
Controlled Decoding provides a mechanism for aligning a model at inference time without retraining.
We propose a mixture of agent-based decoding strategies leveraging the existing off-the-shelf aligned LLM policies.
arXiv Detail & Related papers (2025-03-27T17:34:25Z) - Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy, Unified Evaluation, and Comparative Analysis [89.60263788590893]
The Post-Training Quantization (PTQ) technique has been extensively adopted for large language model (LLM) compression. Existing algorithms focus primarily on performance, overlooking the trade-off among model size, performance, and quantization bitwidth. We provide a novel benchmark for LLM PTQ in this paper.
arXiv Detail & Related papers (2025-02-18T07:35:35Z) - FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading [28.57263158928989]
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities.
We propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization.
arXiv Detail & Related papers (2025-02-17T04:45:53Z) - HedgeAgents: A Balanced-aware Multi-agent Financial Trading System [20.48571388047213]
Large Language Models (LLMs) and Agent-based models exhibit promising potential in real-time market analysis and trading decisions. They still experience a significant -20% loss when confronted with rapid declines or frequent fluctuations. This paper introduces an innovative multi-agent system, HedgeAgents, aimed at bolstering system robustness via "hedging" strategies.
arXiv Detail & Related papers (2025-02-17T04:13:19Z) - TradingAgents: Multi-Agents LLM Financial Trading Framework [4.293484524693143]
TradingAgents proposes a novel stock trading framework inspired by trading firms.
It features LLM-powered agents in specialized roles such as fundamental analysts, sentiment analysts, technical analysts, and traders with varied risk profiles.
By simulating a dynamic, collaborative trading environment, this framework aims to improve trading performance.
arXiv Detail & Related papers (2024-12-28T12:54:06Z) - INVESTORBENCH: A Benchmark for Financial Decision-Making Tasks with LLM-based Agent [15.562784986263654]
InvestorBench is a benchmark for evaluating large language model (LLM)-based agents in financial decision-making contexts.
It provides a comprehensive suite of tasks applicable to different financial products, including single equities like stocks, cryptocurrencies, and exchange-traded funds (ETFs).
We also assess the reasoning and decision-making capabilities of our agent framework using thirteen different LLMs as backbone models.
arXiv Detail & Related papers (2024-12-24T05:22:33Z) - BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z) - From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z) - FinVision: A Multi-Agent Framework for Stock Market Prediction [0.0]
This research introduces a multi-modal multi-agent system designed specifically for financial trading tasks.
A key feature of our approach is the integration of a reflection module, which conducts analyses of historical trading signals and their outcomes.
arXiv Detail & Related papers (2024-10-29T06:02:28Z) - When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z) - Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts [54.529880848937104]
We develop a unified MLLM with the MoE architecture, named Uni-MoE, that can handle a wide array of modalities.
Specifically, it features modality-specific encoders with connectors for a unified multimodal representation.
We evaluate the instruction-tuned Uni-MoE on a comprehensive set of multimodal datasets.
arXiv Detail & Related papers (2024-05-18T12:16:01Z) - Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z) - Developing A Multi-Agent and Self-Adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk Management [1.2016264781280588]
A multi-agent reinforcement learning (RL) approach is proposed to balance the trade-off between the overall portfolio returns and their potential risks.
The obtained empirical results clearly reveal the potential strengths of our proposed MASA framework.
arXiv Detail & Related papers (2024-02-01T11:31:26Z) - Integrating Stock Features and Global Information via Large Language
Models for Enhanced Stock Return Prediction [5.762650600435391]
We propose a novel framework consisting of two components to surmount the challenges of integrating Large Language Models with existing quantitative models.
We have demonstrated superior performance in Rank Information Coefficient and returns, particularly compared to models relying only on stock features in the China A-share market.
arXiv Detail & Related papers (2023-10-09T11:34:18Z) - IMM: An Imitative Reinforcement Learning Approach with Predictive
Representation Learning for Automatic Market Making [33.23156884634365]
Reinforcement Learning technology has achieved remarkable success in quantitative trading.
Most existing RL-based market making methods focus on optimizing single-price level strategies.
We propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions.
arXiv Detail & Related papers (2023-08-17T11:04:09Z) - LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset,
Framework, and Benchmark [81.42376626294812]
We present Language-Assisted Multi-Modal instruction tuning dataset, framework, and benchmark.
Our aim is to establish LAMM as a growing ecosystem for training and evaluating MLLMs.
We present a comprehensive dataset and benchmark, which cover a wide range of vision tasks for 2D and 3D vision.
arXiv Detail & Related papers (2023-06-11T14:01:17Z) - PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark
for Finance [63.51545277822702]
PIXIU is a comprehensive framework including the first financial large language model (LLMs) based on fine-tuning LLaMA with instruction data.
We propose FinMA by fine-tuning LLaMA with the constructed dataset to be able to follow instructions for various financial tasks.
We conduct a detailed analysis of FinMA and several existing LLMs, uncovering their strengths and weaknesses in handling critical financial tasks.
arXiv Detail & Related papers (2023-06-08T14:20:29Z) - Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z) - Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z) - Quantitative Stock Investment by Routing Uncertainty-Aware Trading Experts: A Multi-Task Learning Approach [29.706515133374193]
We show that existing deep learning methods are sensitive to random seeds and network routers.
We propose a novel two-stage mixture-of-experts (MoE) framework for quantitative investment to mimic the efficient bottom-up trading strategy design workflow of successful trading firms.
AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
arXiv Detail & Related papers (2022-06-07T08:58:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.