QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large
Language Model
- URL: http://arxiv.org/abs/2402.03755v1
- Date: Tue, 6 Feb 2024 06:47:14 GMT
- Title: QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large
Language Model
- Authors: Saizhuo Wang, Hang Yuan, Lionel M. Ni, Jian Guo
- Abstract summary: This paper introduces a principled framework to address the core challenge of efficiently building and integrating a domain-specific knowledge base.
In the inner loop, the agent refines its responses by drawing from its knowledge base, while in the outer loop, these responses are tested in real-world scenarios.
We instantiate this framework through an autonomous agent for mining trading signals named QuantAgent.
- Score: 14.800710112671226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous agents based on Large Language Models (LLMs) that devise plans and
tackle real-world challenges have gained prominence. However, tailoring these
agents for specialized domains like quantitative investment remains a
formidable task. The core challenge involves efficiently building and
integrating a domain-specific knowledge base for the agent's learning process.
This paper introduces a principled framework to address this challenge,
comprising a two-layer loop. In the inner loop, the agent refines its responses
by drawing from its knowledge base, while in the outer loop, these responses
are tested in real-world scenarios to automatically enhance the knowledge base
with new insights. We demonstrate that our approach enables the agent to
progressively approximate optimal behavior with provable efficiency.
Furthermore, we instantiate this framework through an autonomous
agent for mining trading signals named QuantAgent. Empirical results showcase
QuantAgent's capability in uncovering viable financial signals and enhancing
the accuracy of financial forecasts.
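The two-layer loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and function names (`KnowledgeBase`, `inner_loop`, `outer_loop`) and the scoring heuristic are assumptions for exposition.

```python
# Hypothetical sketch of QuantAgent's two-layer loop.
# All names and the ranking heuristic are illustrative assumptions.

class KnowledgeBase:
    """Stores (idea, score) records and retrieves top-ranked ones."""

    def __init__(self):
        self.records = []

    def retrieve(self, task, k=3):
        # Placeholder relevance ranking: highest-scoring records first.
        ranked = sorted(self.records, key=lambda r: r["score"], reverse=True)
        return ranked[:k]

    def add(self, idea, score):
        self.records.append({"idea": idea, "score": score})


def inner_loop(task, kb, draft_fn, judge_fn, steps=3):
    """Refine a response by repeatedly drawing on the knowledge base."""
    response = draft_fn(task, kb.retrieve(task))
    for _ in range(steps):
        feedback = judge_fn(task, response)
        if feedback is None:  # judge is satisfied
            break
        response = draft_fn(task, kb.retrieve(task), feedback)
    return response


def outer_loop(task, kb, draft_fn, judge_fn, evaluate_fn, iterations=5):
    """Test refined responses in the environment and grow the knowledge base."""
    best = None
    for _ in range(iterations):
        signal = inner_loop(task, kb, draft_fn, judge_fn)
        score = evaluate_fn(signal)  # e.g. a backtest metric
        kb.add(signal, score)        # new insight enhances the knowledge base
        if best is None or score > best[1]:
            best = (signal, score)
    return best
```

In this sketch, `draft_fn` and `judge_fn` stand in for LLM calls and `evaluate_fn` for a real-world test such as a backtest; each outer iteration feeds its result back into the knowledge base, which is what lets the inner loop improve over time.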
Related papers
- WESE: Weak Exploration to Strong Exploitation for LLM Agents [95.6720931773781]
This paper proposes a novel approach, Weak Exploration to Strong Exploitation (WESE) to enhance LLM agents in solving open-world interactive tasks.
WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
A knowledge graph-based strategy is then introduced to store the acquired knowledge and extract task-relevant knowledge, enhancing the stronger agent in success rate and efficiency for the exploitation task.
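The decoupling described above can be sketched as a weak agent that populates a simple graph of facts, from which a task-relevant subgraph is extracted for the stronger agent. The graph representation and function names here are assumptions for exposition, not the paper's code.

```python
# Illustrative sketch of WESE-style exploration/exploitation decoupling.
# The triple-based graph and helper names are assumptions, not the paper's API.

def weak_explore(environment, graph):
    """Cheap weak agent: record observed (subject, relation, object) facts."""
    for subj, rel, obj in environment:  # environment yields observed triples
        graph.setdefault(subj, []).append((rel, obj))
    return graph


def extract_relevant(graph, task_entities, hops=1):
    """Pull the task-relevant subgraph to prime the strong agent."""
    frontier, relevant = set(task_entities), {}
    for _ in range(hops):
        next_frontier = set()
        for entity in frontier:
            for rel, obj in graph.get(entity, []):
                relevant.setdefault(entity, []).append((rel, obj))
                next_frontier.add(obj)
        frontier = next_frontier
    return relevant
```

The point of the split is cost: the weak agent's exploration is cheap and exhaustive, while the strong (expensive) agent only sees the extracted subgraph relevant to its current task.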
arXiv Detail & Related papers (2024-04-11T03:31:54Z)
- AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents [76.95062553043607]
Evaluating large language models (LLMs) is essential for understanding their capabilities and facilitating their integration into practical applications.
We introduce AgentBoard, a pioneering comprehensive benchmark and accompanied open-source evaluation framework tailored to analytical evaluation of LLM agents.
arXiv Detail & Related papers (2024-01-24T01:51:00Z)
- FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design [11.913409501633616]
FinMem is a novel LLM-based agent framework devised for financial decision-making.
FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability.
This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions.
arXiv Detail & Related papers (2023-11-23T00:24:40Z)
- FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets [9.714447724811842]
This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models.
We capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration.
The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression.
arXiv Detail & Related papers (2023-10-07T12:52:58Z)
- An In-depth Survey of Large Language Model-based Artificial Intelligence Agents [11.774961923192478]
We have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents.
We conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use.
arXiv Detail & Related papers (2023-09-23T11:25:45Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- ExpeL: LLM Agents Are Experiential Learners [60.54312035818746]
We introduce the Experiential Learning (ExpeL) agent to allow learning from agent experiences without requiring parametric updates.
Our agent autonomously gathers experiences and extracts knowledge using natural language from a collection of training tasks.
At inference, the agent recalls its extracted insights and past experiences to make informed decisions.
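The gather-extract-recall cycle described above can be sketched as a small class. This is a hedged stand-in: the heuristic insight extraction below replaces the paper's LLM-driven extraction, and all names are illustrative.

```python
# Hedged sketch of an ExpeL-style experiential learner.
# The string-matching "extraction" is a stand-in for LLM-based insight mining.

class ExperientialAgent:
    def __init__(self):
        self.experiences = []  # (task, action, success) trajectories
        self.insights = []     # natural-language rules distilled from them

    def gather(self, task, action, success):
        """Record an experience from a training task."""
        self.experiences.append((task, action, success))

    def extract_insights(self):
        """Distill experiences into reusable natural-language advice."""
        # Stand-in heuristic: keep only actions that succeeded.
        self.insights = [
            f"For tasks like '{t}', action '{a}' worked"
            for t, a, ok in self.experiences
            if ok
        ]

    def act(self, task):
        """At inference, recall insights instead of updating any parameters."""
        for insight in self.insights:
            if task in insight:
                return insight
        return "no relevant insight; explore"
```

The key property the sketch preserves is that learning happens entirely in the stored experiences and insights, with no parametric (weight) updates at any point.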
arXiv Detail & Related papers (2023-08-20T03:03:34Z)
- Estimating and Incentivizing Imperfect-Knowledge Agents with Hidden Rewards [4.742123770879715]
In practice, incentive providers often cannot observe the reward realizations of incentivized agents.
This paper explores a repeated adverse selection game between a self-interested learning agent and a learning principal.
We introduce an estimator whose only input is the history of the principal's incentives and the agent's choices.
arXiv Detail & Related papers (2023-08-13T08:12:01Z)
- Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges [2.6212127510234797]
In the field of process automation, a new generation of AI-based agents has emerged, enabling the execution of complex tasks.
This paper analyzes the main aspects of trust in AI agents discussed in existing literature, and identifies specific considerations and challenges relevant to this new generation of automation agents.
arXiv Detail & Related papers (2023-08-10T07:12:11Z)
- Reinforcement Learning with Efficient Active Feature Acquisition [59.91808801541007]
In real life, information acquisition might correspond to performing a medical test on a patient.
We propose a model-based reinforcement learning framework that learns an active feature acquisition policy.
Key to the success is a novel sequential variational auto-encoder that learns high-quality representations from partially observed states.
arXiv Detail & Related papers (2020-11-02T08:46:27Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)