Learning and Executing Re-usable Behaviour Trees from Natural Language
Instruction
- URL: http://arxiv.org/abs/2106.01650v1
- Date: Thu, 3 Jun 2021 07:47:06 GMT
- Title: Learning and Executing Re-usable Behaviour Trees from Natural Language
Instruction
- Authors: Gavin Suddrey, Ben Talbot and Frederic Maire
- Abstract summary: Behaviour trees can be used in conjunction with natural language instruction to provide a robust and modular control architecture.
We show how behaviour trees generated using our approach can be generalised to novel scenarios.
We validate this work against an existing corpus of natural language instructions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domestic and service robots have the potential to transform industries such
as health care and small-scale manufacturing, as well as the homes in which we
live. However, due to the overwhelming variety of tasks these robots will be
expected to complete, providing generic out-of-the-box solutions that meet the
needs of every possible user is clearly intractable. To address this problem,
robots must therefore not only be capable of learning how to complete novel
tasks at run-time, but the solutions to these tasks must also be informed by
the needs of the user. In this paper we demonstrate how behaviour trees, a well
established control architecture in the fields of gaming and robotics, can be
used in conjunction with natural language instruction to provide a robust and
modular control architecture for instructing autonomous agents to learn and
perform novel complex tasks. We also show how behaviour trees generated using
our approach can be generalised to novel scenarios, and can be re-used in
future learning episodes to create increasingly complex behaviours. We validate
this work against an existing corpus of natural language instructions,
demonstrate the application of our approach on a simulated robot solving a
toy problem, and on two distinct real-world robot platforms which complete a
block-sorting scenario and a patrol scenario, respectively.
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of traversing a wide variety of environments and overcoming a broad range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z) - Towards a Causal Probabilistic Framework for Prediction,
Action-Selection & Explanations for Robot Block-Stacking Tasks [4.244706520140677]
Causal models provide a principled framework to encode formal knowledge of the causal relationships that govern the robot's interaction with its environment.
We propose a novel causal probabilistic framework to embed a physics simulation capability into a structural causal model to permit robots to perceive and assess the current state of a block-stacking task.
arXiv Detail & Related papers (2023-08-11T15:58:15Z) - ProgPrompt: Generating Situated Robot Task Plans using Large Language
Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables plan generation functional across situated environments.
arXiv Detail & Related papers (2022-09-22T20:29:49Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - Behavior coordination for self-adaptive robots using constraint-based
configuration [0.0]
This paper presents an original algorithm to dynamically configure the control architecture of self-adaptive robots.
The algorithm uses a constraint-based configuration approach to decide which basic robot behaviors should be activated in response to both reactive and deliberative events.
The solution has been implemented as a software development tool called Behavior Coordinator CBC, which is based on ROS and is open source.
arXiv Detail & Related papers (2021-03-24T12:09:44Z) - Combining Planning and Learning of Behavior Trees for Robotic Assembly [0.9262157005505219]
We propose a method for generating Behavior Trees using a Genetic Programming algorithm.
We show that this type of high level learning of Behavior Trees can be transferred to a real system without further training.
arXiv Detail & Related papers (2021-03-16T13:11:39Z) - myGym: Modular Toolkit for Visuomotor Robotic Tasks [0.0]
myGym is a novel virtual robotic toolkit developed for reinforcement learning (RL), intrinsic motivation and imitation learning tasks trained in a 3D simulator.
The modular structure of the simulator enables users to train and validate their algorithms on a large number of scenarios with various robots, environments and tasks.
The toolkit provides pretrained visual modules for visuomotor tasks allowing rapid prototyping, and, moreover, users can customize the visual submodules and retrain with their own set of objects.
arXiv Detail & Related papers (2020-12-21T19:15:05Z) - Learning Behavior Trees with Genetic Programming in Unpredictable
Environments [7.839247285151348]
We show that genetic programming can be effectively used to learn the structure of a behavior tree.
We demonstrate that the learned BTs can solve the same task in a realistic simulator, reaching convergence without the need for task specifics.
arXiv Detail & Related papers (2020-11-06T09:28:23Z) - SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.