Interactive Learning from Natural Language and Demonstrations using
Signal Temporal Logic
- URL: http://arxiv.org/abs/2207.00627v1
- Date: Fri, 1 Jul 2022 19:08:43 GMT
- Title: Interactive Learning from Natural Language and Demonstrations using
Signal Temporal Logic
- Authors: Sara Mohammadinejad, Jesse Thomason, Jyotirmoy V. Deshmukh
- Abstract summary: Natural language (NL) is ambiguous, but real-world tasks and their safety requirements must be communicated unambiguously.
Signal Temporal Logic (STL) is a formal logic that can serve as a versatile, expressive, and unambiguous formal language to describe robotic tasks.
We propose DIALOGUESTL, an interactive approach for learning correct and concise STL formulas from (often) ambiguous NL descriptions.
- Score: 5.88797764615148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural language is an intuitive way for humans to communicate tasks to a
robot. While natural language (NL) is ambiguous, real-world tasks and their
safety requirements need to be communicated unambiguously. Signal Temporal
Logic (STL) is a formal logic that can serve as a versatile, expressive, and
unambiguous formal language to describe robotic tasks. On one hand, existing
work in using STL for the robotics domain typically requires end-users to
express task specifications in STL, a challenge for non-expert users.
On the other, translating from NL to STL specifications is currently
restricted to specific fragments. In this work, we propose DIALOGUESTL, an
interactive approach for learning correct and concise STL formulas from (often)
ambiguous NL descriptions. We use a combination of semantic parsing,
pre-trained transformer-based language models, and user-in-the-loop
clarifications aided by a small number of user demonstrations to predict the
best STL formula to encode NL task descriptions. An advantage of mapping NL to
STL is that there has been considerable recent work on the use of reinforcement
learning (RL) to identify control policies for robots. We show we can use Deep
Q-Learning techniques to learn optimal policies from the learned STL
specifications. We demonstrate that DIALOGUESTL is efficient, scalable, and
robust, and has high accuracy in predicting the correct STL formula with a
small number of demonstrations and a few interactions with an oracle user.
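The bridge from STL formulas to reinforcement learning rests on STL's quantitative semantics: a trace receives a real-valued robustness score that is positive exactly when the formula is satisfied, and that score can serve as a reward signal for Deep Q-Learning. A minimal illustrative sketch (not the DIALOGUESTL implementation; the operator names and example task are assumptions):

```python
# Quantitative (robustness) semantics of two core STL operators over a
# discrete-time 1-D signal. rho > 0 means the trace satisfies the formula,
# rho < 0 means it violates it; |rho| measures the margin.

def always(margins):
    # G phi: phi must hold at every step, so take the worst-case margin.
    return min(margins)

def eventually(margins):
    # F phi: phi must hold at some step, so take the best-case margin.
    return max(margins)

# Illustrative task: "always stay above 0.25, and eventually exceed 0.75",
# i.e. G(x > 0.25) AND F(x > 0.75); conjunction takes the min.
trace = [0.5, 1.0, 0.75]
rho = min(always([x - 0.25 for x in trace]),
          eventually([x - 0.75 for x in trace]))
print(rho)  # 0.25: the trace satisfies the spec with margin 0.25
```

In an RL setting, the robustness of a rolled-out trajectory against the learned STL formula can be used directly as the episode reward that a Deep Q-Learning agent maximizes.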
Related papers
- CoT-TL: Low-Resource Temporal Knowledge Representation of Planning Instructions Using Chain-of-Thought Reasoning [0.0]
CoT-TL is a data-efficient in-context learning framework for translating natural language specifications into temporal logic representations.
CoT-TL achieves state-of-the-art accuracy across three diverse datasets in low-data scenarios.
arXiv Detail & Related papers (2024-10-21T17:10:43Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs [7.746160514029531]
We demonstrate experimental results with LLMs that address robotics task planning problems.
Our approach acquires text descriptions of the task and scene objects, then formulates task planning through natural language reasoning.
Our approach is evaluated on a multi-modal prompt simulation benchmark.
arXiv Detail & Related papers (2024-03-20T17:58:12Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Eliciting Human Preferences with Language Models [56.68637202313052]
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
We propose to use *LMs themselves* to guide the task specification process.
We study GATE in three domains: email validation, content recommendation, and moral reasoning.
arXiv Detail & Related papers (2023-10-17T21:11:21Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- Data-Efficient Learning of Natural Language to Linear Temporal Logic Translators for Robot Task Specification [6.091096843566857]
We present a learning-based approach for translating from natural language commands to LTL specifications with very limited human-labeled training data.
This is in stark contrast to existing natural-language-to-LTL translators, which require large human-labeled datasets.
We show that we can translate natural language commands at 75% accuracy with far less human data.
arXiv Detail & Related papers (2023-03-09T00:09:58Z)
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables plan generation functional across situated environments.
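The "programmatic prompt structure" can be pictured concretely: the environment's available actions and scene objects are shown to the LLM as Python code, and the plan is elicited as the body of a function. A hypothetical sketch in the spirit of ProgPrompt (the prompt format, function names, and task below are illustrative, not the paper's exact prompt):

```python
# Build a code-style prompt: a header of available action stubs, the scene's
# objects, and an empty function whose body the LLM is asked to complete.

def build_prompt(task, objects, actions):
    action_stubs = "\n".join(f"def {a}(obj): ..." for a in actions)
    scene = f"objects = {objects!r}"
    task_fn = task.replace(" ", "_")
    return (f"{action_stubs}\n\n{scene}\n\n"
            f"def {task_fn}():\n"
            f"    # complete this plan using only the actions above\n")

prompt = build_prompt("put banana in fridge",
                      objects=["banana", "fridge"],
                      actions=["grab", "open", "put_in", "close"])
print(prompt)
```

Expressing the plan as code lets the same prompt skeleton transfer across situated environments: only the action stubs and object list change per scene.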
arXiv Detail & Related papers (2022-09-22T20:29:49Z)
- Neuro-Symbolic Causal Language Planning with Commonsense Prompting [67.06667162430118]
Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes Neuro-Symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from the LLMs with commonsense-infused prompting.
arXiv Detail & Related papers (2022-06-06T22:09:52Z)
- Generalizing to New Domains by Mapping Natural Language to Lifted LTL [20.58567011476273]
We introduce an intermediate contextual query representation which can be learned from single positive task specification examples.
We compare our method to state-of-the-art CopyNet models capable of translating natural language.
We demonstrate that our method outputs can be used for planning in a simulated OO-MDP environment.
arXiv Detail & Related papers (2021-10-11T20:49:26Z)
- Backpropagation through Signal Temporal Logic Specifications: Infusing Logical Structure into Gradient-Based Methods [28.72161643908351]
This paper presents a technique, named STLCG, to compute the quantitative semantics of Signal Temporal Logic (STL) formulas using computation graphs.
STL is a powerful and expressive formal language that can specify spatial and temporal properties of signals generated by both continuous and hybrid systems.
arXiv Detail & Related papers (2020-07-31T22:01:39Z)
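The computation-graph framing becomes clearer with an example: the min/max in STL's quantitative semantics are non-smooth, so gradient-based approaches in the STLCG style substitute soft approximations through which gradients can flow. A minimal sketch of that idea in plain Python (log-sum-exp surrogates; the actual STLCG library builds these as PyTorch graph nodes, and the function names here are assumptions):

```python
import math

# STL robustness uses max/min, which pass zero gradient through all
# non-selected inputs. Log-sum-exp softmax/softmin yield smooth surrogates
# that approach the true max/min as the temperature parameter k grows.

def soft_max(values, k=10.0):
    # Smooth upper bound on max(values); subtract the max for stability.
    m = max(values)
    return m + math.log(sum(math.exp(k * (v - m)) for v in values)) / k

def soft_min(values, k=10.0):
    # Smooth lower bound on min(values), via soft_max of negated inputs.
    return -soft_max([-v for v in values], k)

signal = [0.2, 0.8, 0.5]
print(soft_max(signal))  # slightly above max(signal) = 0.8
print(soft_min(signal))  # slightly below min(signal) = 0.2
```

Because both surrogates are differentiable everywhere, a robustness score assembled from them can be backpropagated through, e.g., to optimize a trajectory toward satisfying an STL specification.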
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.