QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large
Language Model
- URL: http://arxiv.org/abs/2402.03755v1
- Date: Tue, 6 Feb 2024 06:47:14 GMT
- Title: QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large
Language Model
- Authors: Saizhuo Wang, Hang Yuan, Lionel M. Ni, Jian Guo
- Abstract summary: This paper introduces a principled framework to address the core challenge of efficiently building and integrating a domain-specific knowledge base.
In the inner loop, the agent refines its responses by drawing from its knowledge base, while in the outer loop, these responses are tested in real-world scenarios.
We instantiate this framework through an autonomous agent for mining trading signals named QuantAgent.
- Score: 14.800710112671226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous agents based on Large Language Models (LLMs) that devise plans and
tackle real-world challenges have gained prominence. However, tailoring these
agents for specialized domains like quantitative investment remains a
formidable task. The core challenge involves efficiently building and
integrating a domain-specific knowledge base for the agent's learning process.
This paper introduces a principled framework to address this challenge,
comprising a two-layer loop. In the inner loop, the agent refines its responses
by drawing from its knowledge base, while in the outer loop, these responses
are tested in real-world scenarios to automatically enhance the knowledge base
with new insights. We demonstrate that our approach enables the agent to
progressively approximate optimal behavior with provable efficiency.
Furthermore, we instantiate this framework through an autonomous agent for
mining trading signals named QuantAgent. Empirical results showcase
QuantAgent's capability in uncovering viable financial signals and enhancing
the accuracy of financial forecasts.
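The two-layer loop lends itself to a compact sketch. The following is a minimal illustration, not the paper's implementation: every name (KnowledgeBase, inner_loop, outer_loop, and the writer/judge/backtest callables) is a hypothetical stand-in for components the abstract only describes in prose.
```python
# Hypothetical sketch of the two-layer loop; all names are illustrative.

class KnowledgeBase:
    """Stores (idea, outcome) records that the agent can retrieve from."""
    def __init__(self):
        self.records = []

    def retrieve(self, idea):
        # Naive relevance filter: records sharing any word with the idea.
        words = set(idea.lower().split())
        return [r for r in self.records
                if words & set(r["idea"].lower().split())]

    def add(self, idea, outcome):
        self.records.append({"idea": idea, "outcome": outcome})


def inner_loop(idea, kb, writer, judge, max_iters=3):
    """Inner loop: refine a candidate signal against the knowledge base."""
    signal = writer(idea, kb.retrieve(idea), feedback=None)
    for _ in range(max_iters):
        feedback = judge(signal, kb.retrieve(idea))
        if feedback == "accept":
            break
        signal = writer(idea, kb.retrieve(idea), feedback=feedback)
    return signal


def outer_loop(ideas, kb, writer, judge, backtest):
    """Outer loop: test signals in the real world and grow the knowledge base."""
    for idea in ideas:
        signal = inner_loop(idea, kb, writer, judge)
        outcome = backtest(signal)   # real-world test, e.g. a backtest score
        kb.add(idea, outcome)        # new insight improves later iterations
```
The key design point is that the outer loop writes backtest outcomes back into the knowledge base, so later inner-loop refinements retrieve from an ever-richer record.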
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
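As a rough illustration of what "step-wise reward" means here (a hedged sketch; the function and similarity measure below are assumptions, not StepAgent's actual method): each step of a trajectory receives its own reward instead of one sparse reward at the end.
```python
# Illustrative step-wise reward: score each agent step against an expert
# step to get a dense, per-step training signal.

def step_wise_rewards(agent_steps, expert_steps, similarity):
    """Return one reward per step rather than a single trajectory reward."""
    return [similarity(a, e) for a, e in zip(agent_steps, expert_steps)]

# Toy usage with exact-match similarity:
rewards = step_wise_rewards(
    ["search", "click", "buy"],
    ["search", "read", "buy"],
    similarity=lambda a, e: 1.0 if a == e else 0.0,
)
print(rewards)  # [1.0, 0.0, 1.0]
```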
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance [95.03771007780976]
We tackle the challenge of developing proactive agents capable of anticipating and initiating tasks without explicit human instructions.
First, we collect real-world human activities to generate proactive task predictions.
These predictions are labeled by human annotators as either accepted or rejected.
The labeled data is used to train a reward model that simulates human judgment.
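The pipeline above (predictions labeled accept/reject, then a reward model fit to those labels) can be sketched as follows; the bag-of-words featurizer and scoring rule are toy assumptions, not the paper's model.
```python
# Sketch of the labeled-data -> reward-model step: accept/reject labels
# are used to fit a scorer that imitates human judgment.

def featurize(prediction):
    # Toy bag-of-words features.
    return set(prediction.lower().split())

def train_reward_model(labeled):
    """labeled: list of (prediction_text, accepted: bool)."""
    accept_words, reject_words = {}, {}
    for text, accepted in labeled:
        bucket = accept_words if accepted else reject_words
        for w in featurize(text):
            bucket[w] = bucket.get(w, 0) + 1
    def reward(prediction):
        # Score = accepted-word evidence minus rejected-word evidence.
        return sum(accept_words.get(w, 0) - reject_words.get(w, 0)
                   for w in featurize(prediction))
    return reward

model = train_reward_model([
    ("remind user about the 3pm meeting", True),
    ("order pizza without asking", False),
])
print(model("remind user about tomorrow's meeting") > 0)  # True
```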
arXiv Detail & Related papers (2024-10-16T08:24:09Z)
- Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
Gödel Agent is a self-evolving framework inspired by the Gödel machine.
Gödel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
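A toy rendering of the self-referential idea (all details are assumptions; the actual Gödel Agent operates on its own running logic): the agent holds its policy as data and swaps it for any self-proposed rewrite that evaluates better.
```python
# Toy recursive self-improvement: the policy is replaced whenever a
# self-proposed rewrite scores higher under the evaluation function.

class SelfImprovingAgent:
    def __init__(self, policy, evaluate):
        self.policy = policy          # the agent's current behavior
        self.evaluate = evaluate      # fitness function over policies

    def step(self, propose_rewrite):
        candidate = propose_rewrite(self.policy)   # e.g. produced by an LLM
        if self.evaluate(candidate) > self.evaluate(self.policy):
            self.policy = candidate                # adopt the improvement

agent = SelfImprovingAgent(policy=lambda x: x + 1,
                           evaluate=lambda p: p(10))
agent.step(propose_rewrite=lambda p: (lambda x: p(x) * 2))
print(agent.policy(10))  # 22: the rewrite scored higher and was adopted
```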
arXiv Detail & Related papers (2024-10-06T10:49:40Z)
- DANA: Domain-Aware Neurosymbolic Agents for Consistency and Accuracy [3.2354860243748873]
Large Language Models (LLMs) have shown remarkable capabilities, but their inherent probabilistic nature often leads to inconsistency and inaccuracy in complex problem-solving tasks.
This paper introduces DANA, an architecture that addresses these issues by integrating domain-specific knowledge with neurosymbolic approaches.
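One hedged sketch of the neurosymbolic pattern this summary suggests (the interface is an assumption, not DANA's architecture): a probabilistic LLM proposal passes through deterministic domain rules that can veto or correct it.
```python
# A probabilistic proposal is checked against symbolic domain rules
# before being accepted, trading raw flexibility for consistency.

def neurosymbolic_answer(question, llm_propose, domain_rules):
    proposal = llm_propose(question)       # probabilistic, may be wrong
    for rule in domain_rules:
        ok, correction = rule(question, proposal)
        if not ok:
            proposal = correction          # symbolic layer enforces consistency
    return proposal

# Toy rule: percentages must lie in [0, 100].
def pct_rule(question, answer):
    return (0 <= answer <= 100, min(max(answer, 0), 100))

print(neurosymbolic_answer("What % margin?",
                           llm_propose=lambda q: 140,
                           domain_rules=[pct_rule]))  # 100
```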
arXiv Detail & Related papers (2024-09-27T18:29:23Z)
- Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation [49.27250832754313]
We present AgentCOT, an LLM-based autonomous agent framework.
At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence.
We introduce two new strategies to enhance the performance of AgentCOT.
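The per-step loop described above might look like the following sketch (names are illustrative; the paper's action space and evidence format are not specified here).
```python
# Sketch of an agent loop that records each intermediate result together
# with its supporting evidence, yielding an auditable reasoning trace.

def agent_cot(task, select_action, execute, is_done, max_steps=10):
    trace = []
    state = task
    for _ in range(max_steps):
        action = select_action(state, trace)
        result, evidence = execute(action, state)
        trace.append({"action": action, "result": result,
                      "evidence": evidence})
        state = result
        if is_done(state):
            break
    return state, trace   # final answer plus the step-by-step trace
```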
arXiv Detail & Related papers (2024-09-19T02:20:06Z)
- WESE: Weak Exploration to Strong Exploitation for LLM Agents [95.6720931773781]
This paper proposes a novel approach, Weak Exploration to Strong Exploitation (WESE) to enhance LLM agents in solving open-world interactive tasks.
WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
A knowledge graph-based strategy is then introduced to store the acquired knowledge and extract task-relevant knowledge, enhancing the stronger agent in success rate and efficiency for the exploitation task.
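A minimal sketch of the decoupling WESE describes, assuming hypothetical weak_agent/strong_agent interfaces and a triple-store knowledge graph; the retrieval heuristic is a toy stand-in.
```python
# Weak-explore / strong-exploit split: a cheap agent gathers global
# knowledge into a graph, and only the task-relevant slice is handed
# to the expensive agent.

def wese(task, weak_agent, strong_agent, environment):
    # 1. Cheap exploration: the weak agent walks the environment and
    #    records (entity, relation, entity) facts into a knowledge graph.
    graph = set()
    for observation in weak_agent.explore(environment):
        graph.update(observation.triples)

    # 2. Retrieval: keep only triples that mention task entities.
    relevant = {t for t in graph if any(e in task for e in (t[0], t[2]))}

    # 3. Expensive exploitation: the strong agent solves the task with
    #    the distilled knowledge instead of exploring from scratch.
    return strong_agent.solve(task, knowledge=relevant)
```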
arXiv Detail & Related papers (2024-04-11T03:31:54Z)
- AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents [76.95062553043607]
Evaluating large language models (LLMs) is essential for understanding their capabilities and facilitating their integration into practical applications.
We introduce AgentBoard, a pioneering comprehensive benchmark and accompanying open-source evaluation framework tailored to the analytical evaluation of LLM agents.
arXiv Detail & Related papers (2024-01-24T01:51:00Z)
- FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design [11.913409501633616]
FinMem is a novel LLM-based agent framework devised for financial decision-making.
FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability.
This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions.
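As a loose illustration of layered memory (the layer names, decay rates, and ranking rule below are all assumptions, not FinMem's design): items in deeper layers decay more slowly, so durable professional knowledge outlives transient market chatter.
```python
# Toy layered memory: events land in a shallow, fast-fading layer and
# long-lived facts sit in a deep, slow-decaying layer.

import time

class LayeredMemory:
    # (name, decay per second) from fast-fading to long-lived
    LAYERS = [("shallow", 0.10), ("intermediate", 0.01), ("deep", 0.001)]

    def __init__(self):
        self.items = []   # each: {text, layer, score, t}

    def add(self, text, importance, layer="shallow"):
        self.items.append({"text": text, "layer": layer,
                           "score": importance, "t": time.time()})

    def recall(self, now=None):
        """Return memories ranked by decayed importance."""
        now = now or time.time()
        decay = dict(self.LAYERS)
        return sorted(self.items, reverse=True,
                      key=lambda m: m["score"] - decay[m["layer"]] * (now - m["t"]))

mem = LayeredMemory()
mem.add("Fed raised rates", importance=0.9, layer="deep")
mem.add("AAPL ticked up 0.2%", importance=0.3)      # shallow, fades fast
print([m["text"] for m in mem.recall()][0])          # 'Fed raised rates'
```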
arXiv Detail & Related papers (2023-11-23T00:24:40Z)
- FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets [9.714447724811842]
This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models.
We capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration.
The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression.
arXiv Detail & Related papers (2023-10-07T12:52:58Z)
- Estimating and Incentivizing Imperfect-Knowledge Agents with Hidden Rewards [4.742123770879715]
In practice, incentive providers often cannot observe the reward realizations of incentivized agents.
This paper explores a repeated adverse selection game between a self-interested learning agent and a learning principal.
We introduce an estimator whose only input is the history of the principal's incentives and the agent's choices.
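A much-simplified revealed-preference sketch of such an estimator (the paper's estimator handles learning agents and is considerably more involved; everything below assumes a myopically greedy agent): if the agent picks arm i under incentives c, then u_i + c_i >= u_j + c_j for every j, which bounds hidden-utility gaps from the observable history alone.
```python
# Toy revealed-preference estimator: each observed choice under known
# incentives tightens a lower bound on a pairwise utility gap.

def estimate_utility_gaps(history, arms):
    """history: list of (incentives: dict arm -> float, chosen arm)."""
    # lb[(i, j)] = best known lower bound on u_i - u_j
    lb = {(i, j): float("-inf") for i in arms for j in arms if i != j}
    for incentives, choice in history:
        for other in arms:
            if other != choice:
                # greedy choice implies u_choice - u_other >= c_other - c_choice
                gap = incentives[other] - incentives[choice]
                lb[(choice, other)] = max(lb[(choice, other)], gap)
    return lb

hist = [({"a": 0.0, "b": 0.0}, "a"),   # agent prefers a unincentivized
        ({"a": 0.0, "b": 0.5}, "b")]   # 0.5 on b flips the choice
print(estimate_utility_gaps(hist, ["a", "b"]))
# {('a','b'): 0.0, ('b','a'): -0.5}, i.e. 0 <= u_a - u_b <= 0.5
```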
arXiv Detail & Related papers (2023-08-13T08:12:01Z)