Behavior Trees Enable Structured Programming of Language Model Agents
- URL: http://arxiv.org/abs/2404.07439v1
- Date: Thu, 11 Apr 2024 02:44:13 GMT
- Title: Behavior Trees Enable Structured Programming of Language Model Agents
- Authors: Richard Kelley
- Abstract summary: We argue that behavior trees provide a unifying framework for combining language models with classical AI and traditional programming.
We introduce Dendron, a Python library for programming language model agents using behavior trees.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language models trained on internet-scale data sets have shown an impressive ability to solve problems in Natural Language Processing and Computer Vision. However, experience is showing that these models are frequently brittle in unexpected ways, and require significant scaffolding to ensure that they operate correctly in the larger systems that comprise "language-model agents." In this paper, we argue that behavior trees provide a unifying framework for combining language models with classical AI and traditional programming. We introduce Dendron, a Python library for programming language model agents using behavior trees. We demonstrate the approach embodied by Dendron in three case studies: building a chat agent, a camera-based infrastructure inspection agent for use on a mobile robot or vehicle, and an agent that has been built to satisfy safety constraints that it did not receive through instruction tuning or RLHF.
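The abstract's core idea, using behavior trees to wrap brittle language-model calls in classical control flow, can be illustrated with a minimal, self-contained sketch. This is an assumption-laden illustration, not Dendron's actual API: the `Sequence`, `Fallback`, `Condition`, and `LanguageModelAction` classes, the `blackboard` dictionary, and the stubbed `generate` callables are all hypothetical names introduced here to show how a conventional safety check can gate a model invocation.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Sequence:
    """Ticks children in order; fails on the first failing child."""
    def __init__(self, children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Fallback:
    """Ticks children in order; succeeds on the first succeeding child."""
    def __init__(self, children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

class Condition:
    """Leaf node wrapping a classical (non-LM) predicate."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, blackboard):
        return Status.SUCCESS if self.predicate(blackboard) else Status.FAILURE

class LanguageModelAction:
    """Leaf node that calls a language model; `generate` is a stand-in stub."""
    def __init__(self, generate):
        self.generate = generate
    def tick(self, blackboard):
        blackboard["reply"] = self.generate(blackboard["input"])
        return Status.SUCCESS

# Guard the model call with a safety check the model itself was never
# instruction-tuned to enforce; fall back to a fixed refusal on failure.
tree = Fallback([
    Sequence([
        Condition(lambda bb: "forbidden" not in bb["input"]),
        LanguageModelAction(lambda text: f"echo: {text}"),
    ]),
    LanguageModelAction(lambda text: "I can't help with that."),
])

bb = {"input": "hello"}
tree.tick(bb)
print(bb["reply"])  # echo: hello
```

The point of the tree structure is that the safety constraint lives in ordinary code (the `Condition` node) rather than in the model's weights, matching the paper's third case study of constraints not obtained through instruction tuning or RLHF.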
Related papers
- The Ann Arbor Architecture for Agent-Oriented Programming [6.630761601310476]
We argue that language models function as automata and, like all automata, should be programmed in the languages they accept.
We introduce the Ann Arbor Architecture, a conceptual framework for agent-oriented programming of language models.
We present the design of our agent platform Postline, and report on our initial experiments in agent training.
arXiv Detail & Related papers (2025-02-14T04:21:36Z) - A Behavior Tree-inspired programming language for autonomous agents [1.5101132008238316]
We propose a design for a functional programming language for autonomous agents, built on the ideas and motivations of Behavior Trees (BTs).
BTs are a popular model for designing agent behavior in robotics and AI.
We present a full specification for our BT-inspired language, and give an implementation in the functional programming language Haskell.
arXiv Detail & Related papers (2024-11-26T22:47:06Z) - Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z) - Bootstrapping Cognitive Agents with a Large Language Model [0.9971537447334835]
Large language models contain noisy general knowledge of the world, yet are hard to train or fine-tune.
In this work, we combine the best of both worlds: bootstrapping a cognitive-based model with the noisy knowledge encoded in large language models.
arXiv Detail & Related papers (2024-02-25T01:40:30Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning in AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Agents: An Open-source Framework for Autonomous Language Agents [98.91085725608917]
We consider language agents as a promising direction towards artificial general intelligence.
We release Agents, an open-source library with the goal of opening up these advances to a wider non-specialist audience.
arXiv Detail & Related papers (2023-09-14T17:18:25Z) - Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents [111.15288256221764]
The Grounded Decoding project aims to solve complex, long-horizon tasks in a robotic setting by leveraging the knowledge of both a language model and grounded models.
We frame this as a problem similar to probabilistic filtering: decode a sequence that has high probability both under the language model and under a set of grounded model objectives.
We demonstrate how such grounded models can be obtained across three simulation and real-world domains, and show that the proposed decoding strategy solves such tasks by combining the knowledge of both models.
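The filtering framing above can be made concrete: score each candidate token by the sum of its log-probability under the language model and under a grounded model, which amounts to greedily maximizing the product of the two distributions. The vocabulary, toy distributions, and function names below are assumptions for illustration, not the paper's implementation.

```python
import math

VOCAB = ["pick", "place", "open", "stop"]

def grounded_greedy_decode(lm_logprobs, grounded_logprobs, steps):
    """Greedy decoding where each token's score is the sum of its
    log-probability under the language model and under a grounded model."""
    sequence = []
    for _ in range(steps):
        scores = {
            tok: lm_logprobs(sequence, tok) + grounded_logprobs(sequence, tok)
            for tok in VOCAB
        }
        sequence.append(max(scores, key=scores.get))
    return sequence

# Toy models: the LM prefers "open", but the grounded model knows "open"
# is infeasible in the current scene and effectively vetoes it.
def lm_logprobs(prefix, tok):
    return math.log({"pick": 0.2, "place": 0.1, "open": 0.6, "stop": 0.1}[tok])

def grounded_logprobs(prefix, tok):
    return math.log(1e-9) if tok == "open" else math.log(1 / 3)

print(grounded_greedy_decode(lm_logprobs, grounded_logprobs, 1))  # ['pick']
```

Even though the language model strongly prefers "open", the grounded objective dominates the combined score, so decoding falls back to the most probable feasible action.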
arXiv Detail & Related papers (2023-03-01T22:58:50Z) - Language Models are General-Purpose Interfaces [109.45478241369655]
We propose to use language models as a general-purpose interface to various foundation models.
A collection of pretrained encoders perceives diverse modalities (such as vision and language).
We propose a semi-causal language modeling objective to jointly pretrain the interface and the modular encoders.
arXiv Detail & Related papers (2022-06-13T17:34:22Z) - Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z) - Language Models are not Models of Language [0.0]
Transfer learning has enabled large deep neural networks trained on the language modeling task to vastly improve performance on downstream tasks.
We argue that the term language model is misleading because deep learning models are not theoretical models of language.
arXiv Detail & Related papers (2021-12-13T22:39:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.