Learning Behavior Trees with Genetic Programming in Unpredictable
Environments
- URL: http://arxiv.org/abs/2011.03252v1
- Date: Fri, 6 Nov 2020 09:28:23 GMT
- Title: Learning Behavior Trees with Genetic Programming in Unpredictable
Environments
- Authors: Matteo Iovino, Jonathan Styrud, Pietro Falco and Christian Smith
- Abstract summary: We show that genetic programming can be effectively used to learn the structure of a behavior tree.
We demonstrate that the learned BTs can solve the same task in a realistic simulator, reaching convergence without the need for task-specific heuristics.
- Score: 7.839247285151348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern industrial applications require robots to be able to operate in
unpredictable environments, and programs to be created with minimal effort,
as there may be frequent changes to the task. In this paper, we show that
genetic programming can be effectively used to learn the structure of a
behavior tree (BT) to solve a robotic task in an unpredictable environment.
Moreover, we propose to use a simple simulator for the learning and demonstrate
that the learned BTs can solve the same task in a realistic simulator, reaching
convergence without the need for task-specific heuristics. The learned solution
is tolerant to faults, making our method appealing for real robotic
applications.
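To make the approach concrete, below is a minimal, self-contained Python sketch of the core idea: behavior trees encoded as nested lists, ticked against a toy stochastic world, and evolved with mutation and truncation selection. The node set ("approach", "grasp", "lift" and their conditions), the toy dynamics, and the GP parameters are illustrative assumptions for this sketch and are not taken from the paper.

```python
import random

# Hypothetical leaf and composite node sets (illustrative, not the paper's).
ACTIONS = ["approach", "grasp", "lift"]
CONDITIONS = ["near", "holding", "lifted"]
COMPOSITES = ["seq", "sel"]  # sequence and fallback (selector) nodes


def random_tree(depth=3):
    """Grow a random BT: strings are leaves, lists are composite nodes."""
    if depth == 0 or random.random() < 0.4:
        return random.choice(ACTIONS + CONDITIONS)
    return [random.choice(COMPOSITES)] + [
        random_tree(depth - 1) for _ in range(random.randint(2, 3))
    ]


def tick(node, state):
    """Execute one tick of the tree against a dict-based world state."""
    if isinstance(node, str):
        if node in CONDITIONS:
            return bool(state.get(node, False))
        # Action leaf: toy dynamics whose effects land with probability 0.9,
        # emulating an unpredictable environment (an assumption of this sketch).
        if random.random() < 0.9:
            if node == "approach":
                state["near"] = True
            elif node == "grasp" and state.get("near"):
                state["holding"] = True
            elif node == "lift" and state.get("holding"):
                state["lifted"] = True
        return True
    kind, children = node[0], node[1:]
    for child in children:
        ok = tick(child, state)
        if kind == "seq" and not ok:
            return False  # a sequence fails on its first failing child
        if kind == "sel" and ok:
            return True   # a fallback succeeds on its first succeeding child
    return kind == "seq"


def size(node):
    return 1 if isinstance(node, str) else 1 + sum(size(c) for c in node[1:])


def fitness(tree, episodes=5, ticks=5):
    """Fraction of episodes that reach the goal, minus a size penalty."""
    wins = 0
    for _ in range(episodes):
        state = {}
        for _ in range(ticks):
            tick(tree, state)
        wins += 1 if state.get("lifted") else 0
    return wins / episodes - 0.01 * size(tree)


def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    i = random.randrange(1, len(tree))
    return tree[:i] + [mutate(tree[i], depth)] + tree[i + 1:]


def evolve(pop_size=30, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best tree:", best)
    print("fitness:", round(fitness(best, episodes=20), 3))
```

Re-ticking the tree from the root means a failed or unfinished step is simply retried on the next tick, which illustrates in miniature why BT policies can tolerate faults; the size penalty in the fitness is a common parsimony-pressure trick to keep evolved trees compact.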
Related papers
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Learning Tool Morphology for Contact-Rich Manipulation Tasks with Differentiable Simulation [27.462052737553055]
We present an end-to-end framework to automatically learn tool morphology for contact-rich manipulation tasks by leveraging differentiable physics simulators.
In our approach, we instead only need to define the objective with respect to the task performance and enable learning a robust morphology by randomizing the task variations.
We demonstrate the effectiveness of our method for designing new tools in several scenarios such as winding ropes, flipping a box and pushing peas onto a scoop in simulation.
arXiv Detail & Related papers (2022-11-04T00:57:36Z)
- Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Learning and Executing Re-usable Behaviour Trees from Natural Language Instruction [1.4824891788575418]
Behaviour trees can be used in conjunction with natural language instruction to provide a robust and modular control architecture.
We show how behaviour trees generated using our approach can be generalised to novel scenarios.
We validate this work against an existing corpus of natural language instructions.
arXiv Detail & Related papers (2021-06-03T07:47:06Z)
- Combining Planning and Learning of Behavior Trees for Robotic Assembly [0.9262157005505219]
We propose a method for generating Behavior Trees using a Genetic Programming algorithm.
We show that this type of high level learning of Behavior Trees can be transferred to a real system without further training.
arXiv Detail & Related papers (2021-03-16T13:11:39Z)
- Error-Aware Policy Learning: Zero-Shot Generalization in Partially Observable Dynamic Environments [18.8481771211768]
We introduce a novel approach to the sim-to-real problem by developing policies capable of adapting to new environments.
Key to our approach is an error-aware policy (EAP) that is explicitly made aware of the effect of unobservable factors during training.
We show that a trained EAP for a hip-torque assistive device can be transferred to different human agents with unseen biomechanical characteristics.
arXiv Detail & Related papers (2021-03-13T15:36:44Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents [54.63186041942257]
We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
The environment provides a controllable and reproducible scenario for reinforcement-learning algorithms.
arXiv Detail & Related papers (2020-03-12T15:52:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.