Designing Behavior Trees from Goal-Oriented LTLf Formulas
- URL: http://arxiv.org/abs/2307.06399v2
- Date: Tue, 19 Dec 2023 16:11:05 GMT
- Title: Designing Behavior Trees from Goal-Oriented LTLf Formulas
- Authors: Aadesh Neupane, Eric G. Mercer, Michael A. Goodrich
- Abstract summary: This paper shows how to turn goals specified using a subset of finite-trace Linear Temporal Logic (LTLf) into a behavior tree (BT). The BT guarantees that successful traces satisfy the goal.
- Score: 3.3674998206524465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal logic can be used to formally specify autonomous agent goals, but
synthesizing planners that guarantee goal satisfaction can be computationally
prohibitive. This paper shows how to turn goals specified using a subset of
finite trace Linear Temporal Logic (LTL) into a behavior tree (BT) that
guarantees that successful traces satisfy the LTL goal. Useful LTL formulas for
achievement goals can be derived using achievement-oriented task mission
grammars, leading to missions made up of tasks combined using LTL operators.
Constructing BTs from LTL formulas leads to a relaxed behavior synthesis
problem in which a wide range of planners can implement the action nodes in the
BT. Importantly, any successful trace induced by the planners satisfies the
corresponding LTL formula. The usefulness of the approach is demonstrated in
two ways: a) exploring the alignment between two planners and LTL goals, and b)
solving a sequential key-door problem for a Fetch robot.
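To ground the construction, here is a minimal, self-contained sketch (not the authors' implementation) of the idea in the abstract: a sequential achievement mission such as F(has_key & F(door_open)) maps onto a Sequence node over action nodes, and any planner may implement an action node so long as it eventually reports success. All names and the tick interface are illustrative assumptions.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Action:
    """Leaf node. `step` may be any planner; the BT only requires that it
    eventually reports SUCCESS when its achievement subgoal holds."""
    def __init__(self, name, step):
        self.name, self.step = name, step
        self.done = False

    def tick(self, state):
        if self.done:
            return Status.SUCCESS
        status = self.step(state)
        self.done = status is Status.SUCCESS
        return status

class Sequence:
    """Ticks children left to right; encodes nested F(a & F(b & ...))."""
    def __init__(self, children):
        self.children = children

    def tick(self, state):
        for child in self.children:
            status = child.tick(state)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

# Toy key-door mission in the spirit of the paper's Fetch demo:
# F(has_key & F(door_open)).
def get_key(state):
    state["has_key"] = True
    return Status.SUCCESS

def open_door(state):
    if state.get("has_key"):          # precondition achieved by the prior task
        state["door_open"] = True
        return Status.SUCCESS
    return Status.RUNNING

mission = Sequence([Action("get_key", get_key), Action("open_door", open_door)])
state = {}
while mission.tick(state) is not Status.SUCCESS:
    pass
print(state)  # {'has_key': True, 'door_open': True}
```

The point mirrors the abstract's claim: the tree structure, not the planners inside the action nodes, is what forces any successful trace to satisfy the formula.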
Related papers
- SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models [24.22168861692322]
We present three key insights, equivalence voting, constrained decoding, and domain-specific fine-tuning.
Equivalence voting ensures consistency by generating and sampling multiple Linear Temporal Logic (LTL) formulas.
Constrained decoding then uses the generated formula to enforce the autoregressive inference of plans.
Domain-specific fine-tuning customizes LLMs to produce safe and efficient plans within specific task domains.
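A rough sketch of the equivalence-voting idea described above: sample several candidate LTL translations, bucket them by equivalence, and keep a representative of the majority bucket. A faithful implementation would check semantic equivalence with an LTL tool such as Spot; the syntactic normalization below is a simplifying stand-in.

```python
from collections import Counter

def normalize(formula: str) -> str:
    # Naive canonicalization: strip whitespace. Stands in for a proper
    # semantic-equivalence check between LTL formulas.
    return "".join(formula.split())

def equivalence_vote(candidates: list[str]) -> str:
    buckets = Counter(normalize(f) for f in candidates)
    winner, _ = buckets.most_common(1)[0]
    # Return one original formula from the winning equivalence class.
    return next(f for f in candidates if normalize(f) == winner)

samples = ["F (a & F b)", "F(a & F b)", "F(b & F a)"]
print(equivalence_vote(samples))  # "F (a & F b)"
```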
arXiv Detail & Related papers (2024-09-28T22:33:44Z) - Unlocking Reasoning Potential in Large Language Models by Scaling Code-form Planning [94.76546523689113]
We introduce CodePlan, a framework that generates and follows code-form plans -- pseudocode that outlines high-level, structured reasoning processes.
CodePlan effectively captures the rich semantics and control flows inherent to sophisticated reasoning tasks.
It achieves a 25.1% relative improvement compared with directly generating responses.
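For illustration only, a toy example of what a code-form plan might look like: pseudocode (here, runnable Python) that lays out the reasoning steps before the final answer is produced. The task and step structure are invented, not drawn from the paper.

```python
def plan_average_speed():
    distance_km = 150              # step 1: extract quantities from the question
    time_h = 2.5
    speed = distance_km / time_h   # step 2: apply the relevant formula
    return f"{speed} km/h"         # step 3: format the final answer

print(plan_average_speed())  # 60.0 km/h
```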
arXiv Detail & Related papers (2024-09-19T04:13:58Z) - Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning.
We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging.
We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
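The general recipe behind such constructions can be sketched briefly: run the environment trace through an automaton for the specification and emit reward on accepting transitions, turning the automaton into a reward process. The two-state automaton below (for the formula F a) is a toy assumption, far simpler than a full LDBA.

```python
# Deterministic automaton for "F a": state 0 = waiting, state 1 = accepting sink.
def automaton_step(q: int, labels: set[str]) -> int:
    return 1 if q == 1 or "a" in labels else 0

ACCEPTING = {1}

def product_rollout(trace):
    """trace: iterable of label sets observed in the environment.
    Yields (automaton state, reward) pairs of the product process."""
    q = 0
    for labels in trace:
        q = automaton_step(q, labels)
        yield q, (1.0 if q in ACCEPTING else 0.0)

trace = [set(), set(), {"a"}, set()]
print(list(product_rollout(trace)))
# [(0, 0.0), (0, 0.0), (1, 1.0), (1, 1.0)]
```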
arXiv Detail & Related papers (2024-08-18T14:25:44Z) - Scaling Up Natural Language Understanding for Multi-Robots Through the Lens of Hierarchy [8.180994118420053]
Long-horizon planning is hindered by challenges such as uncertainty accumulation, computational complexity, delayed rewards and incomplete information.
This work proposes an approach to exploit the task hierarchy from human instructions to facilitate multi-robot planning.
arXiv Detail & Related papers (2024-08-15T14:46:13Z) - ADaPT: As-Needed Decomposition and Planning with Language Models [131.063805299796]
We introduce As-Needed Decomposition and Planning for complex Tasks (ADaPT).
ADaPT explicitly plans and decomposes complex sub-tasks as needed, i.e., when the Large Language Model is unable to execute them.
Our results demonstrate that ADaPT substantially outperforms established strong baselines.
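The as-needed control flow can be sketched in a few lines: attempt direct execution, and only on failure ask for a decomposition and recurse. Here `execute` and `decompose` are hard-coded stand-ins for the LLM executor and planner calls.

```python
def execute(task: str) -> bool:
    # Assumption: only primitive tasks succeed when executed directly.
    return task in {"pick up mug", "walk to sink", "turn on tap"}

def decompose(task: str) -> list[str]:
    # Assumption: a planner LLM would return sub-tasks; hard-coded here.
    plans = {"wash mug": ["pick up mug", "walk to sink", "turn on tap"]}
    return plans.get(task, [])

def adapt(task: str, depth: int = 0, max_depth: int = 3) -> bool:
    if execute(task):                 # try direct execution first
        return True
    if depth >= max_depth:            # give up past the decomposition budget
        return False
    subtasks = decompose(task)        # decompose only when execution failed
    return bool(subtasks) and all(adapt(t, depth + 1, max_depth) for t in subtasks)

print(adapt("wash mug"))  # True: decomposed once, sub-tasks executed directly
```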
arXiv Detail & Related papers (2023-11-08T17:59:15Z) - Tree-Planner: Efficient Close-loop Task Planning with Large Language Models [63.06270302774049]
Tree-Planner reframes task planning with Large Language Models into three distinct phases.
Tree-Planner achieves state-of-the-art performance while maintaining high efficiency.
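One plausible reading of the tree idea, sketched below under assumptions (this summary does not spell out the paper's phases): merge several sampled plans into a prefix tree, so shared plan prefixes are represented and considered only once.

```python
import json

def build_action_tree(plans: list[list[str]]) -> dict:
    """Merge sampled action sequences into a nested-dict prefix tree."""
    root: dict = {}
    for plan in plans:
        node = root
        for action in plan:
            node = node.setdefault(action, {})
    return root

plans = [
    ["open fridge", "take milk", "close fridge"],
    ["open fridge", "take juice", "close fridge"],
]
print(json.dumps(build_action_tree(plans), indent=2))
# "open fridge" appears once, branching into the two alternatives.
```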
arXiv Detail & Related papers (2023-10-12T17:59:50Z) - Conformal Temporal Logic Planning using Large Language Models [27.571083913525563]
We consider missions that require accomplishing multiple high-level sub-tasks expressed in natural language (NL), in a temporal and logical order.
Our goal is to design plans, defined as sequences of robot actions, that accomplish the NL tasks.
We propose HERACLEs, a hierarchical neuro-symbolic planner that relies on a novel integration of existing symbolic planners.
arXiv Detail & Related papers (2023-09-18T19:05:25Z) - Towards Unified Token Learning for Vision-Language Tracking [65.96561538356315]
We present a vision-language (VL) tracking pipeline, termed MMTrack, which casts VL tracking as a token generation task.
Our proposed framework serializes language description and bounding box into a sequence of discrete tokens.
In this new design paradigm, all token queries are required to perceive the desired target and directly predict spatial coordinates of the target.
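A common way to realize such token serialization, shown here as a hedged sketch rather than MMTrack's actual scheme, is to quantize each box coordinate into one of a fixed number of bins and emit the bin indices as discrete tokens.

```python
def box_to_tokens(box, img_w, img_h, n_bins=1000):
    """Quantize (x1, y1, x2, y2) into bin indices; clamp to the last bin."""
    scale = [img_w, img_h, img_w, img_h]
    return [min(n_bins - 1, int(v / s * n_bins)) for v, s in zip(box, scale)]

def tokens_to_box(tokens, img_w, img_h, n_bins=1000):
    """Inverse mapping from bin indices back to (approximate) coordinates."""
    scale = [img_w, img_h, img_w, img_h]
    return [t / n_bins * s for t, s in zip(tokens, scale)]

tokens = box_to_tokens([64.0, 32.0, 192.0, 128.0], 256, 256)
print(tokens)                           # [250, 125, 750, 500]
print(tokens_to_box(tokens, 256, 256))  # [64.0, 32.0, 192.0, 128.0]
```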
arXiv Detail & Related papers (2023-08-27T13:17:34Z) - Planning for Temporally Extended Goals in Pure-Past Linear Temporal Logic: A Polynomial Reduction to Standard Planning [24.40306100502023]
We study temporally extended goals expressed in Pure-Past Linear Temporal Logic (PPLTL).
We devise a technique to translate planning for PPLTL goals into standard planning.
Our translation enables state-of-the-art tools, such as FD or MyND, to handle PPLTL goals seamlessly.
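The intuition behind the polynomial reduction can be demonstrated directly: a pure-past formula can be evaluated left to right over a trace while carrying only the previous step's subformula values, so each subformula can become a single extra fluent in a standard planning encoding. The tuple encoding of formulas below (operators Y, "yesterday", and S, "since") is an illustrative assumption.

```python
def holds(formula, state, prev):
    """Truth of `formula` now; `state` is the set of atoms true now and
    `prev` maps each subformula to its value at the previous step."""
    kind = formula[0]
    if kind == "atom":
        return formula[1] in state
    if kind == "Y":                  # Y f: f held at the previous step
        return prev.get(formula[1], False)
    if kind == "S":                  # f S g: g now, or (f now and f S g before)
        _, f, g = formula
        return holds(g, state, prev) or (
            holds(f, state, prev) and prev.get(formula, False)
        )
    raise ValueError(kind)

def subformulas(formula):
    yield formula
    for part in formula[1:]:
        if isinstance(part, tuple):
            yield from subformulas(part)

def evaluate_trace(formula, trace):
    value, prev = False, {}
    for state in trace:
        value = holds(formula, state, prev)
        # One pass per step: record every subformula's current value,
        # which is all the "memory" the next step needs.
        prev = {f: holds(f, state, prev) for f in subformulas(formula)}
    return value

# "The door has stayed open since the key was grabbed."
goal = ("S", ("atom", "door_open"), ("atom", "has_key"))
trace = [{"has_key"}, {"has_key", "door_open"}, {"door_open"}]
print(evaluate_trace(goal, trace))  # True
```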
arXiv Detail & Related papers (2022-04-21T08:34:49Z) - Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.