robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement
Learning on Real and Simulated Robots
- URL: http://arxiv.org/abs/2007.02753v2
- Date: Mon, 16 Nov 2020 17:00:34 GMT
- Title: robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement
Learning on Real and Simulated Robots
- Authors: Matteo Lucchi, Friedemann Zindler, Stephan Mühlbacher-Karrer, Horst
Pichler
- Abstract summary: We propose robo-gym, an open source toolkit, to increase the use of Deep Reinforcement Learning with real robots.
We demonstrate a unified setup for simulation and real environments that enables seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real-world applications featuring industrial robots.
- Score: 0.5161531917413708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applying Deep Reinforcement Learning (DRL) to complex tasks in the field of
robotics has proven to be very successful in recent years. However, most
publications focus on applying it either to a task in simulation or to a task in
a real-world setup. Although there are great examples of combining the two
worlds with the help of transfer learning, it often requires a lot of additional
work and fine-tuning to make the setup work effectively. In order to increase
the use of DRL with real robots and reduce the gap between simulation and
real-world robotics, we propose an open source toolkit: robo-gym. We
demonstrate a unified setup for simulation and real environments that enables
seamless transfer from training in simulation to application on the robot. We
showcase the capabilities and the effectiveness of the framework with two
real-world applications featuring industrial robots: a mobile robot and a robot
arm. The distributed capabilities of the framework enable several advantages,
such as using distributed algorithms, separating the workload of simulation and
training onto different physical machines, and opening up the future
possibility of training in simulation and the real world at the same time.
Finally, we offer an overview and comparison of robo-gym with other frequently
used state-of-the-art DRL frameworks.
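The unified sim-to-real setup described above is easiest to picture through the standard OpenAI Gym interface that the toolkit exposes. The snippet below is a minimal sketch of that workflow, assuming the robo-gym Python package is installed and a simulation server manager is reachable over the network; the environment ID and IP address are illustrative placeholders, not values taken from the paper.

    import gym
    import robo_gym  # importing the package registers the robo-gym environments with Gym

    # Connect to a simulation server manager running on another machine.
    # Switching to the real robot is intended to require only the matching
    # real-robot environment variant and the robot server's address.
    env = gym.make('NoObstacleNavigationMir100Sim-v0', ip='192.168.1.10')  # illustrative ID and IP

    state = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # placeholder for a trained DRL policy
        state, reward, done, info = env.step(action)
    env.close()

Because the environment only communicates with the robot or simulation server over the network, the simulation and the learning process can run on different physical machines, which is the distributed setup the abstract refers to.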
Related papers
- Physical Simulation for Multi-agent Multi-machine Tending [11.017120167486448]
Reinforcement learning (RL) offers a promising solution where robots can learn through interaction with the environment.
We leveraged a simplistic robotic system to work with RL using "real" data, without having to deploy large, expensive robots in a manufacturing setting.
arXiv Detail & Related papers (2024-10-11T17:57:44Z)
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots [25.650235551519952]
We present RoboCasa, a large-scale simulation framework for training generalist robots in everyday environments.
We provide thousands of 3D assets across over 150 object categories and dozens of interactable furniture and appliances.
Our experiments show a clear scaling trend in using synthetically generated robot data for large-scale imitation learning.
arXiv Detail & Related papers (2024-06-04T17:41:31Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Dynamic Handover: Throw and Catch with Bimanual Hands [30.206469112964033]
We design a system with two multi-finger hands attached to robot arms to solve this problem.
We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy on the real robots.
To overcome the Sim2Real gap, we provide multiple novel algorithm designs including learning a trajectory prediction model for the object.
arXiv Detail & Related papers (2023-09-11T17:49:25Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across different robots.
We propose a novel method named REvolveR that uses continuous evolutionary models for robotic policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework called SAGCI-system towards achieving the above four requirements.
Our system first takes the raw point clouds gathered by the camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment, represented as a URDF.
The robot then uses interactive perception to interact with the environment, verifying and modifying the URDF online.
arXiv Detail & Related papers (2021-11-29T16:53:49Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.