FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and
Character Design
- URL: http://arxiv.org/abs/2311.13743v2
- Date: Sun, 3 Dec 2023 16:18:55 GMT
- Title: FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and
Character Design
- Authors: Yangyang Yu, Haohang Li, Zhi Chen, Yuechen Jiang, Yang Li, Denghui
Zhang, Rong Liu, Jordan W. Suchow, Khaldoun Khashanah
- Abstract summary: FinMem is a novel LLM-based agent framework devised for financial decision-making.
FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability.
This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions.
- Score: 11.913409501633616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in Large Language Models (LLMs) have exhibited notable
efficacy in question-answering (QA) tasks across diverse domains. Their prowess
in integrating extensive web knowledge has fueled interest in developing
LLM-based autonomous agents. While LLMs are efficient in decoding human
instructions and deriving solutions by holistically processing historical
inputs, transitioning to purpose-driven agents requires a supplementary
rational architecture to process multi-source information, establish reasoning
chains, and prioritize critical tasks. Addressing this, we introduce
\textsc{FinMem}, a novel LLM-based agent framework devised for financial
decision-making. It encompasses three core modules: Profiling, to customize the
agent's characteristics; Memory, with layered message processing, to aid the
agent in assimilating hierarchical financial data; and Decision-making, to
convert insights gained from memories into investment decisions. Notably,
\textsc{FinMem}'s memory module aligns closely with the cognitive structure of
human traders, offering robust interpretability and real-time tuning. Its
adjustable cognitive span allows for the retention of critical information
beyond human perceptual limits, thereby enhancing trading outcomes. This
framework enables the agent to self-evolve its professional knowledge, react
agilely to new investment cues, and continuously refine trading decisions in
the volatile financial environment. We first compare \textsc{FinMem} with
various algorithmic agents on a scalable real-world financial dataset,
underscoring its leading trading performance in stocks. We then fine-tune the
agent's perceptual span and character setting to achieve significantly
enhanced trading performance. Collectively, \textsc{FinMem} presents a
cutting-edge LLM agent framework for automated trading, boosting cumulative
investment returns.
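The layered memory with an adjustable cognitive span described above can be illustrated with a minimal sketch. The layer names, decay rates, and recency-times-importance scoring below are illustrative assumptions, not the paper's exact design: the idea is that shallower layers forget quickly while deeper layers retain information longer, and retrieval returns only the top-scoring events within the configured cognitive span.

```python
import math

class LayeredMemory:
    """Minimal sketch of a FinMem-style layered memory store.

    Layer names, decay rates, and the scoring formula are
    hypothetical; they only demonstrate the mechanism of
    layer-dependent forgetting plus a bounded cognitive span.
    """

    # Shallow layers decay fast (daily news); deep layers decay
    # slowly (long-horizon insights such as earnings trends).
    DECAY = {"shallow": 0.9, "intermediate": 0.3, "deep": 0.05}

    def __init__(self, cognitive_span=5):
        self.cognitive_span = cognitive_span  # max events retrieved per query
        self.events = []                      # (layer, timestamp, importance, text)

    def add(self, layer, importance, text, now):
        self.events.append((layer, now, importance, text))

    def score(self, event, now):
        layer, t, importance, _ = event
        recency = math.exp(-self.DECAY[layer] * (now - t))  # exponential forgetting
        return recency * importance

    def retrieve(self, now):
        # Rank all events by decayed score; keep only the cognitive span.
        ranked = sorted(self.events, key=lambda e: self.score(e, now), reverse=True)
        return [e[3] for e in ranked[: self.cognitive_span]]
```

Under this toy scoring, a deep-layer insight recorded long ago can still outrank a stale shallow-layer headline, while the span parameter caps how much context reaches the decision-making module.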
Related papers
- Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks.
However, they still struggle with problems requiring multi-step decision-making and environmental feedback.
We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z)
- FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading [28.57263158928989]
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities.
We propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization.
arXiv Detail & Related papers (2025-02-17T04:45:53Z)
- LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and Regularization [59.75242204923353]
We introduce LLM-Lasso, a framework that leverages large language models (LLMs) to guide feature selection in Lasso regression.
LLMs generate penalty factors for each feature, which are converted into weights for the Lasso penalty using a simple, tunable model.
Features identified as more relevant by the LLM receive lower penalties, increasing their likelihood of being retained in the final model.
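The penalty-factor mechanism described in this summary can be sketched directly: give each coefficient its own L1 weight and solve by coordinate descent with soft-thresholding. The function name, the synthetic data, and the specific penalty values are illustrative assumptions; only the weighted-Lasso objective itself comes from the summary above.

```python
import numpy as np

def llm_weighted_lasso(X, y, penalty_factors, lam=0.1, n_iter=200):
    """Coordinate descent for Lasso with per-feature penalty factors.

    Minimizes ||y - X @ beta||^2 / 2 + lam * n * sum_j penalty_factors[j] * |beta_j|.
    In the LLM-Lasso setting, penalty_factors would come from an LLM:
    features it deems relevant get low factors and so survive selection.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            thr = lam * penalty_factors[j] * n  # feature-specific threshold
            # Soft-thresholding update: large thresholds zero the coefficient.
            beta[j] = np.sign(rho) * max(abs(rho) - thr, 0.0) / col_sq[j]
    return beta
```

With a low factor on a truly informative feature and high factors on noise features, the informative coefficient is recovered nearly unshrunk while the noise coefficients are thresholded to exactly zero, which is the retention behavior the summary describes.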
arXiv Detail & Related papers (2025-02-15T02:55:22Z)
- Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training [69.13064064991552]
Hephaestus-Forge is a large-scale pre-training corpus designed to enhance the capabilities of LLM agents in API function calling, intrinsic reasoning and planning.
Hephaestus-Forge comprises 103B agent-specific data encompassing 76,537 APIs, including both tool documentation to introduce knowledge of API functions and function calling trajectories.
By continual pre-training on Hephaestus-Forge, Hephaestus outperforms small- to medium-scale open-source LLMs and rivals commercial LLMs on three agent benchmarks.
arXiv Detail & Related papers (2025-02-10T15:54:34Z)
- TradingAgents: Multi-Agents LLM Financial Trading Framework [4.293484524693143]
TradingAgents proposes a novel stock trading framework inspired by trading firms.
It features LLM-powered agents in specialized roles such as fundamental analysts, sentiment analysts, technical analysts, and traders with varied risk profiles.
By simulating a dynamic, collaborative trading environment, this framework aims to improve trading performance.
arXiv Detail & Related papers (2024-12-28T12:54:06Z)
- FinVision: A Multi-Agent Framework for Stock Market Prediction [0.0]
This research introduces a multi-modal multi-agent system designed specifically for financial trading tasks.
A key feature of our approach is the integration of a reflection module, which conducts analyses of historical trading signals and their outcomes.
arXiv Detail & Related papers (2024-10-29T06:02:28Z)
- Optimizing Collaboration of LLM based Agents for Finite Element Analysis [1.5039745292757671]
This paper investigates the interactions between multiple agents within Large Language Models (LLMs) in the context of programming and coding tasks.
We utilize the AutoGen framework to facilitate communication among agents, evaluating different configurations based on the success rates from 40 random runs for each setup.
arXiv Detail & Related papers (2024-08-23T23:11:08Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making [28.375203178500556]
Large language models (LLMs) have demonstrated notable potential in conducting complex tasks and are increasingly utilized in various financial applications.
Here, we introduce the FinCon, an LLM-based multi-agent framework with CONceptual verbal reinforcement tailored for diverse FINancial tasks.
A risk-control component in FinCon enhances decision quality by episodically initiating a self-critiquing mechanism to update systematic investment beliefs.
arXiv Detail & Related papers (2024-07-09T05:52:26Z)
- Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning [56.82041895921434]
Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities.
When used as agents for dealing with complex problems in the real world, their performance is far inferior to large commercial models such as ChatGPT and GPT-4.
arXiv Detail & Related papers (2024-03-29T03:48:12Z)
- AgentBench: Evaluating LLMs as Agents [88.45506148281379]
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks.
We present AgentBench, a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.
arXiv Detail & Related papers (2023-08-07T16:08:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.