Enforcing Temporal Constraints on Generative Agent Behavior with
Reactive Synthesis
- URL: http://arxiv.org/abs/2402.16905v1
- Date: Sat, 24 Feb 2024 21:36:26 GMT
- Title: Enforcing Temporal Constraints on Generative Agent Behavior with
Reactive Synthesis
- Authors: Raven Rothkopf, Hannah Tongxin Zeng, Mark Santolucito
- Abstract summary: We propose a combination of formal logic-based program synthesis and Large Language Models to create generative agents.
Our approach uses Temporal Stream Logic (TSL) to generate an automaton that enforces a temporal structure on an agent.
We evaluate our approach on different tasks involved in creating a coherent interactive agent specialized for various application domains.
- Score: 1.1110995501996483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The surge in popularity of Large Language Models (LLMs) has opened doors for
new approaches to the creation of interactive agents. However, managing the
temporal behavior of such agents over the course of an interaction remains
challenging. The stateful, long-horizon, and quantitative reasoning
required for coherent agent behavior does not fit well into the LLM paradigm.
We propose a combination of formal logic-based program synthesis and LLM
content generation to create generative agents that adhere to temporal
constraints. Our approach uses Temporal Stream Logic (TSL) to generate an
automaton that enforces a temporal structure on an agent and leaves the details
of each action at a given moment in time to an LLM. By using TSL, we can
augment the generative agent so that users get stronger guarantees on its
behavior, better interpretability of the system, and greater ability to build
agents in a modular way. We evaluate our approach on different tasks involved
in creating a coherent interactive agent specialized for various application
domains. We found that across all of the tasks, our approach using TSL achieves
at least 96% adherence, whereas the pure LLM-based approach demonstrates as low
as 14.67% adherence.
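
To make the division of labor concrete, here is a minimal Python sketch of the pattern described in the abstract: a hand-written finite-state automaton stands in for the one synthesized from a TSL specification and fixes when each kind of action may occur, while a hypothetical call_llm helper (an assumption for illustration, not the authors' API) fills in what is said at each moment.

```python
# Minimal sketch of the pattern described in the abstract. This is an
# illustration, not the authors' implementation: the automaton below is
# written by hand, whereas the paper synthesizes it from a TSL
# specification, and call_llm() is a hypothetical stand-in for any
# chat-completion API.

from enum import Enum, auto

class State(Enum):
    GREET = auto()  # the agent must greet before doing anything else
    SERVE = auto()  # ordinary question answering
    DONE = auto()   # the agent must say goodbye before the session ends

def call_llm(instruction: str, user_input: str) -> str:
    """Hypothetical LLM call; only the wording of the reply comes from here."""
    return f"<LLM output for {instruction!r} given {user_input!r}>"

def step(state: State, user_input: str) -> tuple[State, str]:
    """One automaton transition: the structure decides *when*, the LLM *what*."""
    if state is State.GREET:
        return State.SERVE, call_llm("Greet the user.", user_input)
    if state is State.SERVE and user_input.strip().lower() == "bye":
        return State.DONE, call_llm("Say goodbye politely.", user_input)
    return State.SERVE, call_llm("Answer the user's question.", user_input)

state = State.GREET
for msg in ["hello", "what is reactive synthesis?", "bye"]:
    state, reply = step(state, msg)
    print(state.name, "->", reply)
```

Because the transition structure is fixed before any model call is made, temporal guarantees such as "greet first" and "say goodbye before ending" hold regardless of what text the LLM produces.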
Related papers
- Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
G"odel Agent is a self-evolving framework inspired by the G"odel machine.
G"odel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - Beyond Prompts: Dynamic Conversational Benchmarking of Large Language Models [0.0]
We introduce a dynamic benchmarking system for conversational agents that evaluates their performance through a single, simulated, and lengthy user interaction.
We context-switch regularly to interleave the tasks, constructing a realistic testing scenario in which we assess the Long-Term Memory, Continual Learning, and Information Integration capabilities of the agents.
arXiv Detail & Related papers (2024-09-30T12:01:29Z) - Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation [49.27250832754313]
We present AgentCOT, an LLM-based autonomous agent framework.
At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence.
We introduce two new strategies to enhance the performance of AgentCOT.
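
A minimal sketch of the select-act-record loop this summary describes; everything here (function names, stopping rule) is an assumption inferred from the abstract, not the paper's code.

```python
# Illustrative sketch of an AgentCOT-style loop: each step selects an
# action, executes it, and records the intermediate result together with
# its supporting evidence. All names and the stopping rule are assumed.

from dataclasses import dataclass

@dataclass
class Step:
    action: str
    result: str
    evidence: str

def select_action(question: str, trace: list[Step]) -> str:
    # Hypothetical policy: look something up once, then finish.
    return "lookup" if not trace else "finish"

def execute(action: str, question: str) -> tuple[str, str]:
    # Hypothetical executor returning an intermediate result plus evidence.
    return f"result of {action} for {question!r}", f"evidence for {action}"

def run_agent(question: str, max_steps: int = 5) -> list[Step]:
    trace: list[Step] = []
    for _ in range(max_steps):
        action = select_action(question, trace)
        result, evidence = execute(action, question)
        trace.append(Step(action, result, evidence))
        if action == "finish":
            break
    return trace

for s in run_agent("Who introduced Temporal Stream Logic?"):
    print(s)
```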
arXiv Detail & Related papers (2024-09-19T02:20:06Z) - Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z) - Hello Again! LLM-powered Personalized Agent for Long-term Dialogue [63.65128176360345]
We introduce a model-agnostic framework, the Long-term Dialogue Agent (LD-Agent).
It incorporates three independently tunable modules dedicated to event perception, persona extraction, and response generation.
The effectiveness, generality, and cross-domain capabilities of LD-Agent are empirically demonstrated.
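
For illustration, a minimal sketch of the three-module pipeline this summary names; the interfaces below are assumptions, not the paper's actual API.

```python
# Illustrative sketch only: the LD-Agent summary names three independently
# tunable modules. These interfaces are assumptions for illustration.

class EventPerception:
    def extract(self, turn: str) -> str:
        # e.g., distill the salient event from one dialogue turn
        return f"event({turn})"

class PersonaExtraction:
    def update(self, persona: list[str], turn: str) -> list[str]:
        # e.g., accumulate persona traits across sessions
        return persona + [f"trait({turn})"]

class ResponseGeneration:
    def respond(self, event: str, persona: list[str], turn: str) -> str:
        # condition the reply on long-term events and the evolving persona
        return f"reply to {turn!r} given {event} and {persona}"

# Model-agnostic wiring: each module can be tuned or swapped independently.
perception, extractor, generator = EventPerception(), PersonaExtraction(), ResponseGeneration()
persona: list[str] = []
for turn in ["I just moved to Oslo", "Any hiking tips nearby?"]:
    event = perception.extract(turn)
    persona = extractor.update(persona, turn)
    print(generator.respond(event, persona, turn))
```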
arXiv Detail & Related papers (2024-06-09T21:58:32Z) - KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents [54.09074527006576]
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges.
This inadequacy primarily stems from the lack of built-in action knowledge in language agents.
We introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge.
arXiv Detail & Related papers (2024-03-05T16:39:12Z) - Affordable Generative Agents [16.372072265248192]
Affordable Generative Agents (AGA) is a framework for enabling the generation of believable, low-cost interactions at both the agent-environment and inter-agent levels.
Our code is publicly available at: https://github.com/AffordableGenerativeAgents/Affordable-Generative-Agents.
arXiv Detail & Related papers (2024-02-03T06:16:28Z) - LLM-Powered Hierarchical Language Agent for Real-time Human-AI
Coordination [28.22553394518179]
We propose a Hierarchical Language Agent (HLA) for human-AI coordination.
HLA provides strong reasoning abilities while maintaining real-time execution.
Human studies show that HLA outperforms other baseline agents, including slow-mind-only agents and fast-mind-only agents.
arXiv Detail & Related papers (2023-12-23T11:09:48Z) - Formally Specifying the High-Level Behavior of LLM-Based Agents [24.645319505305316]
LLMs have emerged as promising tools for solving challenging problems without the need for task-specific finetuned models.
Currently, the design and implementation of such agents are ad hoc: the wide variety of tasks to which LLM-based agents may be applied means there can be no one-size-fits-all approach to agent design.
We propose a minimalistic generation framework that simplifies the process of building agents.
arXiv Detail & Related papers (2023-10-12T17:24:15Z) - AgentBench: Evaluating LLMs as Agents [88.45506148281379]
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks.
We present AgentBench, a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.
arXiv Detail & Related papers (2023-08-07T16:08:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.