GoalNet: Inferring Conjunctive Goal Predicates from Human Plan
Demonstrations for Robot Instruction Following
- URL: http://arxiv.org/abs/2205.07081v1
- Date: Sat, 14 May 2022 15:14:40 GMT
- Title: GoalNet: Inferring Conjunctive Goal Predicates from Human Plan
Demonstrations for Robot Instruction Following
- Authors: Shreya Sharma, Jigyasa Gupta, Shreshth Tuli, Rohan Paul and Mausam
- Abstract summary: Our goal is to enable a robot to learn how to sequence its actions to perform tasks specified as natural language instructions.
We introduce a novel neuro-symbolic model, GoalNet, for contextual and task-dependent inference of goal predicates.
GoalNet demonstrates a significant improvement (51%) in the task completion rate in comparison to a state-of-the-art rule-based approach.
- Score: 15.405156791794191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our goal is to enable a robot to learn how to sequence its actions to perform
tasks specified as natural language instructions, given successful
demonstrations from a human partner. The ability to plan high-level tasks can
be factored as (i) inferring specific goal predicates that characterize the
task implied by a language instruction for a given world state and (ii)
synthesizing a feasible goal-reaching action-sequence with such predicates. For
the former, we leverage a neural network prediction model, while utilizing a
symbolic planner for the latter. We introduce a novel neuro-symbolic model,
GoalNet, for contextual and task-dependent inference of goal predicates from
human demonstrations and linguistic task descriptions. GoalNet combines (i)
learning, where dense representations are acquired for language instruction and
the world state that enables generalization to novel settings and (ii)
planning, where the symbolic planner's cause-effect modeling eschews
irrelevant predicates, facilitating multi-stage decision making in large
domains. GoalNet demonstrates a significant improvement (51%) in the task
completion rate in comparison to a state-of-the-art rule-based approach on a
benchmark data set displaying linguistic variations, particularly for
multi-stage instructions.
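The abstract describes a two-stage pipeline: a neural model infers a conjunctive set of goal predicates from the instruction and world state, and a symbolic planner then synthesizes an action sequence that achieves them. The following is a minimal illustrative sketch of that factorization; all function names, the toy lookup, and the one-action-per-predicate "planner" are hypothetical stand-ins, not the paper's actual model or API.

```python
# Hypothetical sketch of a neuro-symbolic goal-inference + planning pipeline.
# Names and interfaces are illustrative; they are not GoalNet's real API.

def infer_goal_predicates(instruction, world_state):
    """Stand-in for the learned model: map a language instruction and a
    world state to a conjunctive set of goal predicates.
    A real system would encode both inputs with a neural network;
    here a toy keyword lookup illustrates the interface."""
    if "milk" in instruction and "fridge" in instruction:
        return {("in", "milk", "fridge"), ("closed", "fridge")}
    return set()

def symbolic_plan(goal_predicates, world_state):
    """Stand-in for the symbolic planner: produce actions that make every
    goal predicate true. A real planner (e.g. a PDDL solver) would search
    over operator preconditions and effects; this toy version emits one
    'achieve' action per unsatisfied predicate."""
    plan = []
    for pred in sorted(goal_predicates):
        if pred not in world_state:
            plan.append(("achieve",) + pred)
    return plan

state = {("open", "fridge")}
goals = infer_goal_predicates("put the milk in the fridge", state)
plan = symbolic_plan(goals, state)
print(plan)
```

The key design point the sketch mirrors is the factorization itself: the learned component handles linguistic variation, while the planner filters out predicates irrelevant to reaching the goal.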
Related papers
- Integrating Self-supervised Speech Model with Pseudo Word-level Targets
from Visually-grounded Speech Model [57.78191634042409]
We propose Pseudo-Word HuBERT (PW-HuBERT), a framework that integrates pseudo word-level targets into the training process.
Our experimental results on four spoken language understanding (SLU) benchmarks suggest the superiority of our model in capturing semantic information.
arXiv Detail & Related papers (2024-02-08T16:55:21Z)
- Learning adaptive planning representations with natural language guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z)
- Conformal Temporal Logic Planning using Large Language Models [27.571083913525563]
We consider missions that require accomplishing multiple high-level sub-tasks expressed in natural language (NL), in a temporal and logical order.
Our goal is to design plans, defined as sequences of robot actions, that accomplish NL tasks.
We propose HERACLEs, a hierarchical neuro-symbolic planner that relies on a novel integration of existing symbolic planners.
arXiv Detail & Related papers (2023-09-18T19:05:25Z)
- A Computational Interface to Translate Strategic Intent from Unstructured Language in a Low-Data Setting [7.2466963932212245]
We build a computational interface capable of translating unstructured language strategies into actionable intent in the form of goals and constraints.
We collect a dataset of over 1000 examples, mapping language strategies to the corresponding goals and constraints, and show that our model, trained on this dataset, significantly outperforms human interpreters.
arXiv Detail & Related papers (2022-08-17T16:11:07Z)
- Few-shot Subgoal Planning with Language Models [58.11102061150875]
We show that language priors encoded in pre-trained language models allow us to infer fine-grained subgoal sequences.
In contrast to recent methods which make strong assumptions about subgoal supervision, our experiments show that language models can infer detailed subgoal sequences without any fine-tuning.
arXiv Detail & Related papers (2022-05-28T01:03:30Z)
- Context-Aware Language Modeling for Goal-Oriented Dialogue Systems [84.65707332816353]
We formulate goal-oriented dialogue as a partially observed Markov decision process.
We derive a simple and effective method to finetune language models in a goal-aware way.
We evaluate our method on a practical flight-booking task using AirDialogue.
arXiv Detail & Related papers (2022-04-18T17:23:11Z)
- Skill Induction and Planning with Latent Language [94.55783888325165]
We formulate a generative model of action sequences in which goals generate sequences of high-level subtask descriptions.
We describe how to train this model using primarily unannotated demonstrations by parsing demonstrations into sequences of named high-level subtasks.
In trained models, the space of natural language commands indexes a library of skills; agents can use these skills to plan by generating high-level instruction sequences tailored to novel goals.
arXiv Detail & Related papers (2021-10-04T15:36:32Z)
- Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation [80.29069988090912]
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction.
We propose to leverage offline robot datasets with crowd-sourced natural language labels.
We find that our approach outperforms both goal-image specifications and language conditioned imitation techniques by more than 25%.
arXiv Detail & Related papers (2021-09-02T17:42:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.