From Abstractions to Grounded Languages for Robust Coordination of Task
Planning Robots
- URL: http://arxiv.org/abs/1905.00517v3
- Date: Thu, 22 Feb 2024 23:07:35 GMT
- Title: From Abstractions to Grounded Languages for Robust Coordination of Task
Planning Robots
- Authors: Yu Zhang
- Abstract summary: We study the automatic construction of languages that are maximally flexible while being sufficiently explicative for coordination.
Our language expresses a plan for any given task as a "plan sketch" to convey just-enough details while maximizing the flexibility to realize it.
- Score: 4.496989927037321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we take a first step toward bridging a gap in coordinating
task planning robots. Specifically, we study the automatic construction of languages
that are maximally flexible while being sufficiently explicative for
coordination. To this end, we view language as a machinery for specifying
temporal-state constraints of plans. Such a view enables us to reverse-engineer
a language from the ground up by mapping these composable constraints to words.
Our language expresses a plan for any given task as a "plan sketch" to convey
just-enough details while maximizing the flexibility to realize it, leading to
robust coordination with optimality guarantees among other benefits. We
formulate and analyze the problem, provide an approximate solution, and
validate the advantages of our approach under various scenarios to shed light
on its applications.
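The paper itself does not include code, but the "plan sketch" idea can be made concrete. Below is a minimal sketch, assuming a plan sketch is modeled as an ordered list of temporal-state constraints (predicates over world states) and a concrete plan as a state trajectory; the data structures, predicate names, and task are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical illustration of a "plan sketch" viewed as ordered
# temporal-state constraints: any concrete plan whose state trajectory
# satisfies the constraints in order is a valid realization of the sketch.
from typing import Callable, Dict, List

State = Dict[str, bool]            # e.g., {"holding_block": True, "at_table": False}
Constraint = Callable[[State], bool]

def satisfies_sketch(trajectory: List[State], sketch: List[Constraint]) -> bool:
    """Check that the constraints hold in order along the state trajectory
    (a subsequence match), leaving everything else up to the executing robot."""
    i = 0
    for state in trajectory:
        if i < len(sketch) and sketch[i](state):
            i += 1
    return i == len(sketch)

# A sketch for a pick-and-place task: only the milestones are constrained.
sketch = [
    lambda s: s["holding_block"],                       # eventually hold the block
    lambda s: s["at_table"] and s["holding_block"],     # then reach the table with it
    lambda s: s["at_table"] and not s["holding_block"], # then release it there
]

# One concrete plan (state trajectory) that realizes the sketch.
plan_a = [
    {"holding_block": False, "at_table": False},
    {"holding_block": True,  "at_table": False},
    {"holding_block": True,  "at_table": True},
    {"holding_block": False, "at_table": True},
]
print(satisfies_sketch(plan_a, sketch))  # True
```

Because only the milestones are constrained, many different trajectories realize the same sketch; that under-specification is the flexibility the abstract refers to.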
Related papers
- Joint Verification and Refinement of Language Models for Safety-Constrained Planning [21.95203475140736]
We develop a method to generate executable plans and formally verify them against task-relevant safety specifications.
Given a high-level task description in natural language, the proposed method queries a language model to generate plans in the form of executable robot programs.
It then converts the generated plan into an automaton-based representation, allowing formal verification of the automaton against the specifications (a toy sketch of this verification step appears after this list).
arXiv Detail & Related papers (2024-10-18T21:16:30Z)
- Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction [11.614036749291216]
We introduce a new distributed multi-robot planner, S-ATLAS (Safe plAnning for Teams of Language-instructed AgentS), which is capable of achieving user-defined mission success rates.
We show, both theoretically and empirically, that the proposed planner can achieve user-specified task success rates while minimizing the overall number of help requests.
arXiv Detail & Related papers (2024-02-23T15:02:44Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Learning adaptive planning representations with natural language guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems in the field.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- Interactive Task Planning with Language Models [97.86399877812923]
An interactive robot framework accomplishes long-horizon task planning and can easily generalize to new goals or distinct tasks, even during execution.
Recent large language model based approaches can allow for more open-ended planning but often require heavy prompt engineering or domain-specific pretrained models.
We propose a simple framework that achieves interactive task planning with language models.
arXiv Detail & Related papers (2023-10-16T17:59:12Z)
- $\mu$PLAN: Summarizing using a Content Plan as Cross-Lingual Bridge [72.64847925450368]
Cross-lingual summarization consists of generating a summary in one language given an input document in a different language.
This work presents $\mu$PLAN, an approach to cross-lingual summarization that uses an intermediate planning step as a cross-lingual bridge.
arXiv Detail & Related papers (2023-05-23T16:25:21Z)
- Multimodal Contextualized Plan Prediction for Embodied Task Completion [9.659463406886301]
Task planning is an important component of traditional robotics systems, enabling robots to compose fine-grained skills to perform more complex tasks.
Recent work on translating natural language into executable actions for task completion in simulated embodied agents focuses on directly predicting low-level action sequences.
We focus instead on predicting a higher-level plan representation for one such embodied task completion dataset, TEACh.
arXiv Detail & Related papers (2023-05-10T22:29:12Z)
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables plan generation that functions across situated environments.
arXiv Detail & Related papers (2022-09-22T20:29:49Z)
- GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following [15.405156791794191]
Our goal is to enable a robot to learn how to sequence its actions to perform tasks specified as natural language instructions.
We introduce a novel neuro-symbolic model, GoalNet, for contextual and task-dependent inference of goal predicates.
GoalNet demonstrates a significant improvement (51%) in task completion rate compared to a state-of-the-art rule-based approach.
arXiv Detail & Related papers (2022-05-14T15:14:40Z)
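To make the verification pipeline from the first related paper above more concrete, here is a minimal sketch of an automaton-based safety check in the spirit it describes: a generated plan (an action sequence) is run through a small monitor automaton and rejected if any prefix reaches the error state. The monitor, action names, and toy specification are hypothetical illustrations, not that paper's actual method or tooling.

```python
# Hypothetical sketch: verify a language-model-generated plan (a list of
# action labels) against a safety specification encoded as a monitor
# automaton with an explicit error state.
from typing import Dict, List, Tuple

# Toy safety monitor: "never 'pour' before 'grasp_cup'".
# States: 0 = cup not grasped, 1 = cup grasped, 2 = error.
MONITOR: Dict[Tuple[int, str], int] = {
    (0, "grasp_cup"): 1,
    (0, "pour"): 2,          # pouring before grasping violates the spec
    (1, "pour"): 1,
    (1, "place_cup"): 0,
}

def verify_plan(plan: List[str], monitor=MONITOR, error_state: int = 2) -> bool:
    """Run the plan through the monitor; the plan is safe iff no prefix
    drives the monitor into the error state."""
    state = 0
    for action in plan:
        state = monitor.get((state, action), state)  # unlisted actions self-loop
        if state == error_state:
            return False
    return True

print(verify_plan(["move_to_cup", "grasp_cup", "pour"]))  # True  (safe)
print(verify_plan(["move_to_cup", "pour", "grasp_cup"]))  # False (unsafe)
```

A rejected plan would then be fed back for refinement; this toy monitor only illustrates the shape of the check, not the full verify-and-refine loop described in that paper.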
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.