Building LLM Agents by Incorporating Insights from Computer Systems
- URL: http://arxiv.org/abs/2504.04485v1
- Date: Sun, 06 Apr 2025 13:38:37 GMT
- Title: Building LLM Agents by Incorporating Insights from Computer Systems
- Authors: Yapeng Mi, Zhi Gao, Xiaojian Ma, Qing Li
- Abstract summary: We advocate for building LLM agents by incorporating insights from computer systems. Inspired by the von Neumann architecture, we propose a structured framework for LLM agentic systems.
- Score: 23.3629124649768
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: LLM-driven autonomous agents have emerged as a promising direction in recent years. However, many of these LLM agents are designed empirically or based on intuition, often lacking systematic design principles, which results in diverse agent structures with limited generality and scalability. In this paper, we advocate for building LLM agents by incorporating insights from computer systems. Inspired by the von Neumann architecture, we propose a structured framework for LLM agentic systems, emphasizing modular design and universal principles. Specifically, this paper first provides a comprehensive review of LLM agents from the computer system perspective, then identifies key challenges and future directions inspired by computer system design, and finally explores the learning mechanisms for LLM agents beyond the computer system. The insights gained from this comparative analysis offer a foundation for systematic LLM agent design and advancement.
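The abstract does not spell out the paper's module decomposition, but the von Neumann analogy can be made concrete. The sketch below is a minimal, hypothetical illustration: `ControlUnit`, `Memory`, and `ToolRegistry` are assumed names standing in for the CPU, memory, and I/O roles, and the "LLM" is a stub policy so the loop runs without a model.

```python
# Minimal sketch of a von Neumann-style LLM agent loop.
# Module names (ControlUnit, Memory, ToolRegistry) are illustrative
# assumptions, not the paper's actual framework.

class Memory:
    """Memory unit: stores past (action, argument, result) steps."""
    def __init__(self):
        self.entries = []
    def write(self, entry):
        self.entries.append(entry)
    def read(self, k=3):
        return self.entries[-k:]

class ToolRegistry:
    """I/O devices: external tools the agent can invoke by name."""
    def __init__(self):
        self.tools = {}
    def register(self, name, fn):
        self.tools[name] = fn
    def call(self, name, *args):
        return self.tools[name](*args)

class ControlUnit:
    """CPU analogue: an LLM policy that decides the next action."""
    def __init__(self, llm, memory, tools):
        self.llm, self.memory, self.tools = llm, memory, tools
    def step(self, task):
        context = self.memory.read()          # fetch recent history
        action, arg = self.llm(task, context)  # "decode" the next instruction
        result = self.tools.call(action, arg)  # "execute" via a tool
        self.memory.write((action, arg, result))
        return result

# Stub "LLM" policy so the sketch runs without a model.
def stub_llm(task, context):
    return "echo", task

tools = ToolRegistry()
tools.register("echo", lambda x: f"done: {x}")
agent = ControlUnit(stub_llm, Memory(), tools)
print(agent.step("summarize report"))  # done: summarize report
```

The point of the decomposition is that each role (control, memory, I/O) can be swapped independently, which is the kind of modularity the paper argues for.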
Related papers
- A Survey on Trustworthy LLM Agents: Threats and Countermeasures [67.23228612512848]
Large Language Models (LLMs) and Multi-agent Systems (MAS) have significantly expanded the capabilities of LLM ecosystems.
We propose the TrustAgent framework, a comprehensive study on the trustworthiness of agents.
arXiv Detail & Related papers (2025-03-12T08:42:05Z)
- APT: Architectural Planning and Text-to-Blueprint Construction Using Large Language Models for Open-World Agents [8.479128275067742]
We present an advanced Large Language Model (LLM)-driven framework that enables autonomous agents to construct complex structures in Minecraft.
By employing chain-of-thought decomposition along with multimodal inputs, the framework generates detailed architectural layouts and blueprints.
Our agent incorporates both memory and reflection modules to facilitate lifelong learning, adaptive refinement, and error correction throughout the building process.
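The memory-and-reflection loop described above can be sketched as a retry cycle: each failed action produces a reflection note that is stored alongside the outcome, and the note informs the next attempt. All names here (`reflect`, `build`, `toy_execute`) are hypothetical stand-ins, not the APT framework's API.

```python
# Hypothetical sketch of a build loop with memory and reflection modules,
# in the spirit of the APT agent; names and logic are assumptions.

def reflect(action, outcome):
    """Reflection module: turn a failed action into a corrective note."""
    if outcome == "error":
        return f"avoid repeating '{action}' without adjusting placement"
    return None

def build(plan, execute, max_retries=2):
    memory = []  # persistent log of (action, outcome, reflection) triples
    for action in plan:
        for attempt in range(max_retries + 1):
            outcome = execute(action, memory)
            note = reflect(action, outcome)
            memory.append((action, outcome, note))
            if note is None:      # success: move on to the next build step
                break
    return memory

# Toy executor: fails on first try, succeeds once a reflection exists.
def toy_execute(action, memory):
    failed_before = any(a == action and o == "error" for a, o, _ in memory)
    return "ok" if failed_before else "error"

log = build(["place_wall", "place_roof"], toy_execute)
```

Because the memory persists across actions, later steps can condition on earlier failures, which is what "lifelong learning" amounts to in this toy form.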
arXiv Detail & Related papers (2024-11-26T09:31:28Z)
- Specifications: The missing link to making the development of LLM systems an engineering discipline [65.10077876035417]
We discuss the progress the field has made so far, through advances like structured outputs, process supervision, and test-time compute.
We outline several future directions for research to enable the development of modular and reliable LLM-based systems.
arXiv Detail & Related papers (2024-11-25T07:48:31Z)
- LLM-based Multi-Agent Systems: Techniques and Business Perspectives [26.74974842247119]
In the era of (multi-modal) large language models, most operational processes can be reformulated and reproduced using LLM agents.
As a natural trend of development, the tools being called are themselves becoming autonomous agents, so the full intelligent system becomes an LLM-based Multi-Agent System (LaMAS).
Compared to a single-LLM-agent system, LaMAS has the advantages of i) dynamic task decomposition and organic specialization, ii) higher flexibility for system changes, and iii) feasibility of monetization for each entity.
arXiv Detail & Related papers (2024-11-21T11:36:29Z)
- Configurable Foundation Models: Building LLMs from a Modular Perspective [115.63847606634268]
A growing tendency to decompose LLMs into numerous functional modules allows for inference with a subset of modules and dynamic assembly of modules to tackle complex tasks.
We coin the term brick to represent each functional module, designating the modularized structure as customizable foundation models.
We present four brick-oriented operations: retrieval and routing, merging, updating, and growing.
We find that the FFN layers follow modular patterns with functional specialization of neurons and functional neuron partitions.
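Of the four brick operations listed above, "retrieval and routing" is the easiest to illustrate: given an input, select a small subset of bricks to activate. The sketch below uses keyword overlap as a deliberately crude stand-in for a learned router; the brick names and scoring rule are assumptions, not the paper's method.

```python
# Illustrative sketch of brick "retrieval and routing": pick the
# functional module(s) most relevant to an input query.
# Keyword-overlap scoring is a toy assumption; real routers are learned.

BRICKS = {
    "math":  {"keywords": {"sum", "integral", "prove"}},
    "code":  {"keywords": {"python", "compile", "debug"}},
    "trans": {"keywords": {"translate", "french", "german"}},
}

def route(query, bricks=BRICKS, top_k=1):
    """Return up to top_k brick names with nonzero relevance to the query."""
    tokens = set(query.lower().split())
    scores = {name: len(tokens & spec["keywords"])
              for name, spec in bricks.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked[:top_k] if scores[name] > 0]

print(route("debug this python function"))  # ['code']
```

Running only the routed bricks is what makes "inference with a subset of modules" cheaper than executing the full model on every input.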
arXiv Detail & Related papers (2024-09-04T17:01:02Z)
- Large Language Model-Based Agents for Software Engineering: A Survey [20.258244647363544]
The recent advance in Large Language Models (LLMs) has shaped a new paradigm of AI agents, i.e., LLM-based agents.
We collect 106 papers and categorize them from two perspectives, i.e., the SE and agent perspectives.
In addition, we discuss open challenges and future directions in this critical domain.
arXiv Detail & Related papers (2024-09-04T15:59:41Z)
- On the Design and Analysis of LLM-Based Algorithms [74.7126776018275]
Large language models (LLMs) are increasingly used as subroutines in algorithms.
LLMs have achieved remarkable empirical success.
Our proposed framework holds promise for advancing LLM-based algorithms.
arXiv Detail & Related papers (2024-07-20T07:39:07Z)
- A Survey on Self-Evolution of Large Language Models [116.54238664264928]
Large language models (LLMs) have significantly advanced in various fields and intelligent agent applications.
Self-evolution approaches, which enable LLMs to autonomously acquire, refine, and learn from experiences generated by the model itself, are rapidly growing.
arXiv Detail & Related papers (2024-04-22T17:43:23Z)
- AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System [91.41155892086252]
We open-source a new AI agent library, AgentLite, which simplifies research investigation into LLM agents.
AgentLite is a task-oriented framework designed to enhance the ability of agents to break down tasks.
We introduce multiple practical applications developed with AgentLite to demonstrate its convenience and flexibility.
arXiv Detail & Related papers (2024-02-23T06:25:20Z)
- Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems [29.828997665535336]
Large language models (LLMs) have strong capabilities in solving diverse natural language processing tasks.
However, the safety and security issues of LLM systems have become the major obstacle to their widespread application.
This paper proposes a comprehensive taxonomy, which systematically analyzes potential risks associated with each module of an LLM system.
arXiv Detail & Related papers (2024-01-11T09:29:56Z)
- The Tyranny of Possibilities in the Design of Task-Oriented LLM Systems: A Scoping Survey [1.0489539392650928]
The paper begins by defining a minimal task-oriented LLM system and exploring the design space of such systems.
We discuss a pattern in our results and formulate them into three conjectures.
In all, the scoping survey presents seven conjectures that can help guide future research efforts.
arXiv Detail & Related papers (2023-12-29T13:35:20Z)
- Agent-OM: Leveraging LLM Agents for Ontology Matching [4.222245509121683]
This study introduces a novel agent-powered design paradigm for ontology matching systems.
We propose a framework, namely Agent-OM (Agent for Ontology Matching), consisting of two Siamese agents for retrieval and matching.
Our system can achieve results very close to the long-standing best performance on simple OM tasks.
arXiv Detail & Related papers (2023-12-01T03:44:54Z)
- Towards Responsible Generative AI: A Reference Architecture for Designing Foundation Model based Agents [28.406492378232695]
Foundation model based agents derive their autonomy from the capabilities of foundation models.
This paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model based agents.
arXiv Detail & Related papers (2023-11-22T04:21:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.