AgentLite: A Lightweight Library for Building and Advancing
Task-Oriented LLM Agent System
- URL: http://arxiv.org/abs/2402.15538v1
- Date: Fri, 23 Feb 2024 06:25:20 GMT
- Title: AgentLite: A Lightweight Library for Building and Advancing
Task-Oriented LLM Agent System
- Authors: Zhiwei Liu, Weiran Yao, Jianguo Zhang, Liangwei Yang, Zuxin Liu,
Juntao Tan, Prafulla K. Choubey, Tian Lan, Jason Wu, Huan Wang, Shelby
Heinecke, Caiming Xiong, Silvio Savarese
- Abstract summary: We open-source a new AI agent library, AgentLite, which simplifies research investigation into LLM agents.
AgentLite is a task-oriented framework designed to enhance the ability of agents to break down tasks.
We introduce multiple practical applications developed with AgentLite to demonstrate its convenience and flexibility.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The booming success of LLMs has initiated rapid development of LLM agents. Though
the foundation of an LLM agent is the generative model, it is critical to
devise the optimal reasoning strategies and agent architectures. Accordingly,
LLM agent research has advanced from simple chain-of-thought prompting to more
complex ReAct and Reflection reasoning strategies; agent architectures have also
evolved from single-agent generation to multi-agent conversation, as well as
multi-LLM multi-agent group chat. However, with the existing intricate
frameworks and libraries, creating and evaluating new reasoning strategies and
agent architectures has become a complex challenge, which hinders research
investigation into LLM agents. Thus, we open-source a new AI agent library,
AgentLite, which simplifies this process by offering a lightweight,
user-friendly platform for innovating LLM agent reasoning, architectures, and
applications with ease. AgentLite is a task-oriented framework designed to
enhance the ability of agents to break down tasks and facilitate the
development of multi-agent systems. Furthermore, we introduce multiple
practical applications developed with AgentLite to demonstrate its convenience
and flexibility. Get started now at: https://github.com/SalesforceAIResearch/AgentLite.
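The abstract contrasts chain-of-thought prompting with ReAct- and Reflection-style reasoning and highlights task decomposition. For readers new to these strategies, the sketch below shows a minimal, library-agnostic think/act/observe loop in plain Python; every name in it (react_loop, the search tool, the scripted fake_llm) is an illustrative assumption, not AgentLite's API, which is documented in the repository linked above.

```python
"""Minimal, library-agnostic sketch of a ReAct-style think/act/observe loop.

NOTE: this is NOT AgentLite's API. All names below are illustrative assumptions
used only to make the reasoning-strategy discussion concrete.
"""
from typing import Callable, Dict

# A tool/action is just a named callable mapping a string input to a string output.
Action = Callable[[str], str]


def react_loop(
    llm: Callable[[str], str],      # any text-in/text-out model call
    actions: Dict[str, Action],     # tool name -> tool implementation
    task: str,
    max_steps: int = 5,
) -> str:
    """Alternate thinking, acting, and observing until the model emits 'Finish: <answer>'."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        # 1. Think and pick an action: the model is prompted to reply with either
        #    'Action: <name>[<input>]' or 'Finish: <answer>'.
        step = llm(transcript + "Thought:")
        transcript += f"Thought:{step}\n"
        if "Finish:" in step:
            return step.split("Finish:", 1)[1].strip()
        if "Action:" in step:
            # 2. Act: parse 'Action: name[input]' and invoke the matching tool.
            call = step.split("Action:", 1)[1].strip()
            name, arg = call.split("[", 1)
            obs = actions.get(name.strip(), lambda _: "unknown action")(arg.rstrip("]"))
            # 3. Observe: feed the tool output back into the model's context.
            transcript += f"Observation: {obs}\n"
    return "No answer within the step budget."


if __name__ == "__main__":
    # Scripted stand-in for an LLM so the sketch runs without any API key.
    scripted = iter([
        " I should look this up. Action: search[AgentLite]",
        " The observation answers the task. Finish: AgentLite is an open-source LLM agent library.",
    ])

    def fake_llm(prompt: str) -> str:
        return next(scripted)

    tools = {"search": lambda query: f"'{query}' is an open-source library for building LLM agents."}
    print(react_loop(fake_llm, tools, "What is AgentLite?"))
```

In AgentLite itself, a loop of this kind is wrapped by the library's agent abstractions and, per the abstract, extended toward task decomposition and multi-agent systems; see the repository above for the actual classes and examples.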
Related papers
- AgentSquare: Automatic LLM Agent Search in Modular Design Space (arXiv, 2024-10-08)
  Large Language Models (LLMs) have led to a rapid growth of agentic systems capable of handling a wide range of complex tasks. The authors introduce a new research problem: Modularized LLM Agent Search (MoLAS).
- LLM-Agent-UMF: LLM-based Agent Unified Modeling Framework for Seamless Integration of Multi Active/Passive Core-Agents (arXiv, 2024-09-17)
  The authors propose a novel LLM-based Agent Unified Modeling Framework (LLM-Agent-UMF). The framework distinguishes between the different components of an LLM-based agent, setting LLMs and tools apart from a new element, the core-agent. It is evaluated by applying it to thirteen state-of-the-art agents, demonstrating its alignment with their functionalities.
- EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms (arXiv, 2024-06-20)
  EvoAgent is a generic method to automatically extend expert agents into multi-agent systems via evolutionary algorithms. The authors show that EvoAgent can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents.
- Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization (arXiv, 2024-02-27)
  Large Language Models (LLMs) exhibit robust problem-solving capabilities across diverse tasks, but such task solvers require manually crafted prompts to convey task rules and regulate behavior. The authors propose Agent-Pro, an LLM-based agent with policy-level reflection and optimization.
- AgentTuning: Enabling Generalized Agent Abilities for LLMs (arXiv, 2023-10-19)
  AgentTuning is a simple and general method to enhance the agent abilities of open large language models. It employs a hybrid instruction-tuning strategy that combines AgentInstruct with open-source instructions from general domains. Evaluations show that AgentTuning enables LLMs' agent capabilities without compromising their general abilities.
- Formally Specifying the High-Level Behavior of LLM-Based Agents (arXiv, 2023-10-12)
  LLMs have emerged as promising tools for solving challenging problems without the need for task-specific fine-tuned models. Currently, the design and implementation of such agents is ad hoc, since the wide variety of tasks that LLM-based agents may be applied to means there can be no one-size-fits-all approach to agent design. The authors propose a minimalistic generation framework that simplifies the process of building agents.
- Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization (arXiv, 2023-10-03)
  Large language model (LLM) agents have been shown to be effective on a wide range of tasks, and ensembling multiple LLM agents can further improve performance. Existing approaches employ a fixed set of agents that interact in a static architecture. The authors build a framework named Dynamic LLM-Agent Network (DyLAN) for LLM-agent collaboration on complicated tasks such as reasoning and code generation.
- The Rise and Potential of Large Language Model Based Agents: A Survey (arXiv, 2023-09-14)
  Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI). The survey traces the concept of agents from its philosophical origins to its development in AI and explains why LLMs are suitable foundations for agents. It explores the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
- AgentBench: Evaluating LLMs as Agents (arXiv, 2023-08-07)
  Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. AgentBench is a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.