DynaSaur: Large Language Agents Beyond Predefined Actions
- URL: http://arxiv.org/abs/2411.01747v2
- Date: Wed, 04 Jun 2025 16:26:58 GMT
- Title: DynaSaur: Large Language Agents Beyond Predefined Actions
- Authors: Dang Nguyen, Viet Dac Lai, Seunghyun Yoon, Ryan A. Rossi, Handong Zhao, Ruiyi Zhang, Puneet Mathur, Nedim Lipka, Yu Wang, Trung Bui, Franck Dernoncourt, Tianyi Zhou
- Abstract summary: Existing LLM agent systems typically select actions from a fixed and predefined set at every step. We propose an LLM agent framework that can dynamically create and compose actions as needed. In this framework, the agent interacts with its environment by generating and executing programs written in a general-purpose programming language.
- Score: 108.75187263724838
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Existing LLM agent systems typically select actions from a fixed and predefined set at every step. While this approach is effective in closed, narrowly scoped environments, it presents two major challenges for real-world, open-ended scenarios: (1) it significantly restricts the planning and acting capabilities of LLM agents, and (2) it requires substantial human effort to enumerate and implement all possible actions, which is impractical in complex environments with a vast number of potential actions. To address these limitations, we propose an LLM agent framework that can dynamically create and compose actions as needed. In this framework, the agent interacts with its environment by generating and executing programs written in a general-purpose programming language. Moreover, generated actions are accumulated over time for future reuse. Our extensive experiments across multiple benchmarks show that this framework significantly improves flexibility and outperforms prior methods that rely on a fixed action set. Notably, it enables LLM agents to adapt and recover in scenarios where predefined actions are insufficient or fail due to unforeseen edge cases. Our code can be found at https://github.com/adobe-research/dynasaur.
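To make the described loop concrete, here is a minimal sketch of a dynamic-action agent under stated assumptions: the `llm` callable, the prompt format, and the `ActionLibrary` helper are illustrative placeholders rather than the authors' implementation; the point is only that each action is an LLM-generated Python program that is executed and then kept for reuse.

```python
# Minimal sketch of a dynamic-action agent loop (illustrative only; not the
# authors' implementation). The LLM writes a Python program as its "action",
# the framework executes it, and defined functions are saved for reuse.
import traceback


class ActionLibrary:
    """Accumulates generated actions so later steps can reuse them."""

    def __init__(self):
        self.actions = {}  # function name -> source code

    def add(self, name, source):
        self.actions[name] = source

    def render(self):
        # Injected into the prompt so the model can call prior actions.
        return "\n\n".join(self.actions.values())


def agent_loop(llm, task, max_steps=10):
    """llm(prompt) -> Python source; a hypothetical callable, not a real API."""
    library = ActionLibrary()
    observations = []
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            f"Previously defined actions:\n{library.render()}\n"
            f"Observations so far: {observations}\n"
            "Write and call a Python function that makes progress on the task; "
            "store its output in a variable named `result`."
        )
        code = llm(prompt)
        namespace = {}
        try:
            exec(code, namespace)                        # run the generated action
            observations.append(namespace.get("result"))
        except Exception:
            observations.append(traceback.format_exc())  # errors become observations
        # Accumulate newly defined functions for reuse in later steps.
        for name, obj in namespace.items():
            if callable(obj) and not name.startswith("__"):
                library.add(name, code)
    return observations
```

Feeding execution errors back as observations is what lets such an agent recover when a generated action hits an unforeseen edge case, as the abstract describes.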
Related papers
- Enhancing LLM-Based Agents via Global Planning and Hierarchical Execution [18.68431625184045]
GoalAct is a novel agent framework that introduces a continuously updated global planning mechanism and integrates a hierarchical execution strategy.
GoalAct decomposes task execution into high-level skills, including searching, coding, writing, and more.
We evaluate GoalAct on LegalAgentBench, a benchmark of legal tasks that require the use of multiple types of tools.
arXiv Detail & Related papers (2025-04-23T09:43:40Z)
- SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement [81.30121762971473]
SynWorld is a framework that allows agents to autonomously explore environments, optimize, and enhance their understanding of actions.
Our experiments demonstrate that SynWorld is an effective and general approach to learning action knowledge in new environments.
arXiv Detail & Related papers (2025-04-04T16:10:57Z)
- Parallelized Planning-Acting for Efficient LLM-based Multi-Agent Systems [31.894636711684523]
We propose a novel parallelized planning-acting framework for Multi-Agent Systems.
The proposed framework features a dual-thread architecture with interruptible execution to enable concurrent planning and acting.
arXiv Detail & Related papers (2025-03-05T13:53:10Z)
- Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks.
However, they still struggle with problems requiring multi-step decision-making and environmental feedback.
We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z)
- Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM Multi-Agent Systems [10.67359331022116]
Talk Structurally, Act Hierarchically (TalkHier) is a novel framework that introduces a structured communication protocol for context-rich exchanges.
TalkHier surpasses various types of SoTA, including an inference scaling model (OpenAI-o1) and open-source multi-agent models (e.g., AgentVerse).
arXiv Detail & Related papers (2025-02-16T12:26:58Z)
- ML-SceGen: A Multi-level Scenario Generation Framework [1.249418440326334]
We seek to build a three-stage framework that lets users regain controllability over the generated scenarios.
We use the Answer Set Programming (ASP) solver Clingo to help us generate comprehensive logical traffic within intersections.
arXiv Detail & Related papers (2025-01-18T14:43:40Z)
- MALMM: Multi-Agent Large Language Models for Zero-Shot Robotics Manipulation [52.739500459903724]
Large Language Models (LLMs) have demonstrated remarkable planning abilities across various domains, including robotics manipulation and navigation.
We propose a novel multi-agent LLM framework that distributes high-level planning and low-level control code generation across specialized LLM agents.
We evaluate our approach on nine RLBench tasks, including long-horizon tasks, and demonstrate its ability to solve robotics manipulation in a zero-shot setting.
arXiv Detail & Related papers (2024-11-26T17:53:44Z)
- AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents [52.13695464678006]
This study enhances an LLM-based web agent by simply refining its observation and action space.
AgentOccam surpasses the previous state-of-the-art and concurrent work by 9.8 (+29.4%) and 5.9 (+15.8%) absolute points respectively.
arXiv Detail & Related papers (2024-10-17T17:50:38Z)
- Language Models can Infer Action Semantics for Symbolic Planners from Environment Feedback [26.03718733867297]
We propose Predicting Semantics of Actions with Language Models (PSALM).
PSALM learns action semantics by leveraging the strengths of both symbolic planners and Large Language Models (LLMs).
Experiments show PSALM boosts plan success rate from 36.4% (on Claude-3.5) to 100%, and explores the environment more efficiently than prior work to infer ground truth domain action semantics.
arXiv Detail & Related papers (2024-06-04T21:29:56Z)
- RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents [7.773304246142602]
Retrieval-Augmented Planning (RAP) is a framework designed to dynamically leverage past experiences corresponding to the current situation and context.
RAP distinguishes itself by being versatile: it excels in both text-only and multimodal environments.
Empirical evaluations demonstrate RAP's effectiveness, where it achieves SOTA performance in textual scenarios.
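A rough, self-contained sketch of the retrieval idea (not RAP's actual method): past (situation, plan) records are scored against the current situation, here with a simple word-overlap score standing in for a learned retriever, and the top matches are placed into the planning prompt.

```python
# Hedged sketch of experience retrieval for planning (illustrative only).
# The memory record format and scoring function are assumptions.
def overlap_score(a, b):
    # Jaccard-style word overlap as a stand-in for a real retriever.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)


def retrieve(memory, situation, k=3):
    """memory: list of {"situation": str, "plan": str} records."""
    ranked = sorted(memory,
                    key=lambda r: overlap_score(r["situation"], situation),
                    reverse=True)
    return ranked[:k]


def planning_prompt(memory, situation, task):
    examples = retrieve(memory, situation)
    context = "\n".join(f"Situation: {r['situation']}\nPlan: {r['plan']}"
                        for r in examples)
    return (f"Relevant past experiences:\n{context}\n\n"
            f"Current situation: {situation}\nTask: {task}\nPlan:")
```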
arXiv Detail & Related papers (2024-02-06T00:53:27Z)
- Executable Code Actions Elicit Better LLM Agents [76.95566120678787]
This work proposes to use Python code to consolidate Large Language Model (LLM) agents' actions into a unified action space (CodeAct).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations through multi-turn interactions.
The encouraging performance of CodeAct motivates us to build an open-source LLM agent that interacts with environments by executing interpretable code and collaborates with users using natural language.
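As a toy illustration of why an executable action space can be more expressive than a fixed per-turn tool-call schema, the snippet below composes several made-up, stubbed tools and control flow in a single code action; none of these functions come from the CodeAct paper.

```python
# Toy, self-contained contrast (all tool names are invented): a single code
# action can compose tools and control flow that a fixed per-turn tool-call
# schema would spread across many interaction rounds.

def search_flights(dest):
    # Stub standing in for a real tool.
    return [{"id": "F1", "price": 250, "dest": dest},
            {"id": "F2", "price": 480, "dest": dest}]

def book_flight(flight_id):
    return f"booked {flight_id}"

def notify_user(message):
    print(message)

# The part below is what an LLM might emit as one "code action":
flights = search_flights(dest="SFO")
cheap = [f for f in flights if f["price"] < 300]
if cheap:
    print(book_flight(cheap[0]["id"]))
else:
    notify_user("No flights under $300; widening search.")
```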
arXiv Detail & Related papers (2024-02-01T21:38:58Z)
- Formally Specifying the High-Level Behavior of LLM-Based Agents [24.645319505305316]
LLMs have emerged as promising tools for solving challenging problems without the need for task-specific finetuned models.
Currently, the design and implementation of such agents are ad hoc, as the wide variety of tasks that LLM-based agents may be applied to means there can be no one-size-fits-all approach to agent design.
We propose a minimalistic generation framework that simplifies the process of building agents.
arXiv Detail & Related papers (2023-10-12T17:24:15Z)
- LASER: LLM Agent with State-Space Exploration for Web Navigation [57.802977310392755]
Large language models (LLMs) have been successfully adapted for interactive decision-making tasks like web navigation.
Previous methods implicitly assume a forward-only execution mode for the model, where they only provide oracle trajectories as in-context examples.
We propose to model the interactive task as state space exploration, where the LLM agent transitions among a pre-defined set of states by performing actions to complete the task.
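A skeletal sketch of this state-space framing under assumptions: the states, actions, and transition table below are invented for illustration; the idea is only that each state exposes its own valid actions and the agent transitions among states until the task is done.

```python
# Skeletal sketch of state-space-style navigation (illustrative; the state
# and action names are invented, not LASER's actual definitions).
from enum import Enum, auto


class State(Enum):
    SEARCH = auto()      # e.g., a search page
    RESULTS = auto()     # a results listing
    ITEM = auto()        # an item detail page
    DONE = auto()


# Actions available in each state, and the state each action leads to.
TRANSITIONS = {
    State.SEARCH:  {"type_query": State.RESULTS},
    State.RESULTS: {"open_item": State.ITEM, "refine_query": State.SEARCH},
    State.ITEM:    {"select_item": State.DONE, "back_to_results": State.RESULTS},
}


def run_episode(choose_action, observe, max_steps=15):
    """choose_action(state, allowed, observation) would be backed by an LLM."""
    state = State.SEARCH
    for _ in range(max_steps):
        if state is State.DONE:
            break
        allowed = TRANSITIONS[state]
        action = choose_action(state, list(allowed), observe(state))
        state = allowed.get(action, state)  # invalid actions keep the agent in place
    return state
```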
arXiv Detail & Related papers (2023-09-15T05:44:08Z)
- AgentBench: Evaluating LLMs as Agents [88.45506148281379]
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks.
We present AgentBench, a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.
arXiv Detail & Related papers (2023-08-07T16:08:11Z)
- Generating Executable Action Plans with Environmentally-Aware Language Models [4.162663632560141]
Large Language Models (LLMs) trained using massive text datasets have recently shown promise in generating action plans for robotic agents.
We propose an approach to generate environmentally-aware action plans that agents are better able to execute.
arXiv Detail & Related papers (2022-10-10T18:56:57Z)
- Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents [111.33545170562337]
We investigate the possibility of grounding high-level tasks, expressed in natural language, to a chosen set of actionable steps.
We find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans.
We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions.
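A hedged sketch of the translation step: each free-form step produced by the language model is mapped to the closest admissible action by embedding similarity; the `embed` function and action strings below are placeholders, not the paper's actual models or environments.

```python
# Hedged sketch of mapping free-form plan steps to admissible actions via
# embedding similarity. embed() and the action list are placeholders.
import numpy as np


def translate_plan(embed, plan_steps, admissible_actions):
    """Map each generated step to its nearest admissible action."""
    action_vecs = [embed(a) for a in admissible_actions]
    translated = []
    for step in plan_steps:
        step_vec = embed(step)
        sims = [
            float(np.dot(step_vec, v)
                  / (np.linalg.norm(step_vec) * np.linalg.norm(v)))
            for v in action_vecs
        ]
        translated.append(admissible_actions[int(np.argmax(sims))])
    return translated


# Possible usage (with some sentence-embedding function `embed`):
# translate_plan(embed, ["grab the mug", "walk to sink"],
#                ["pick up cup", "go to sink", "open fridge"])
```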
arXiv Detail & Related papers (2022-01-18T18:59:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.