Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline
Data in the Real World
- URL: http://arxiv.org/abs/2308.07741v3
- Date: Fri, 24 Nov 2023 14:53:50 GMT
- Title: Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline
Data in the Real World
- Authors: Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes,
Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin
Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim,
Jongchan Baek, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan
Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas
Gavin Cangan, Bernhard Schölkopf, Georg Martius
- Abstract summary: The Real Robot Challenge 2022 served as a bridge between the reinforcement learning and robotics communities.
We asked the participants to learn two dexterous manipulation tasks involving pushing, grasping, and in-hand orientation from provided real-robot datasets.
Extensive software documentation and an initial stage based on a simulation of the real set-up made the competition particularly accessible.
- Score: 38.54892412474853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Experimentation on real robots is demanding in terms of time and costs. For
this reason, a large part of the reinforcement learning (RL) community uses
simulators to develop and benchmark algorithms. However, insights gained in
simulation do not necessarily translate to real robots, in particular for tasks
involving complex interactions with the environment. The Real Robot Challenge
2022 therefore served as a bridge between the RL and robotics communities by
allowing participants to experiment remotely with a real robot - as easily as
in simulation.
In recent years, offline reinforcement learning has matured into a
promising paradigm for learning from pre-collected datasets, alleviating the
reliance on expensive online interactions. We therefore asked the participants
to learn two dexterous manipulation tasks involving pushing, grasping, and
in-hand orientation from provided real-robot datasets. Extensive software
documentation and an initial stage based on a simulation of the real set-up
made the competition particularly accessible. By giving each team a generous
access budget to evaluate their offline-learned policies on a cluster of seven
identical real TriFinger platforms, we organized an exciting competition for
machine learners and roboticists alike.
In this work we state the rules of the competition, present the methods used
by the winning teams and compare their results with a benchmark of
state-of-the-art offline RL algorithms on the challenge datasets.
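To make the offline-learning setting concrete, the sketch below fits a simple behavior-cloning policy to a pre-collected dataset of observation-action pairs. It is an illustrative example only, not the challenge's actual pipeline or API: the load_dataset stand-in, the observation/action dimensions, and the network sizes are placeholder assumptions, and behavior cloning is just one of many possible offline baselines.

# Minimal, illustrative sketch of offline policy learning (behavior cloning)
# on a pre-collected dataset of (observation, action) pairs.
# NOTE: load_dataset, the dimensions, and the network are placeholder
# assumptions; this is not the Real Robot Challenge code or API.
import numpy as np
import torch
import torch.nn as nn

def load_dataset(num_transitions=10_000, obs_dim=32, act_dim=9):
    """Stand-in for a pre-collected real-robot dataset (random data here)."""
    rng = np.random.default_rng(0)
    observations = rng.standard_normal((num_transitions, obs_dim)).astype(np.float32)
    actions = rng.standard_normal((num_transitions, act_dim)).astype(np.float32)
    return observations, actions

def behavior_cloning(observations, actions, epochs=10, batch_size=256):
    """Fit a small MLP policy to imitate the dataset actions (MSE loss)."""
    obs_dim, act_dim = observations.shape[1], actions.shape[1]
    policy = nn.Sequential(
        nn.Linear(obs_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, act_dim),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    obs_t, act_t = torch.from_numpy(observations), torch.from_numpy(actions)
    for _ in range(epochs):
        perm = torch.randperm(len(obs_t))
        for start in range(0, len(obs_t), batch_size):
            idx = perm[start:start + batch_size]
            loss = nn.functional.mse_loss(policy(obs_t[idx]), act_t[idx])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy

if __name__ == "__main__":
    obs, act = load_dataset()
    policy = behavior_cloning(obs, act, epochs=2)
    print("Example action:", policy(torch.from_numpy(obs[:1])).detach().numpy())

State-of-the-art offline RL algorithms typically go beyond pure imitation, for example by constraining the learned policy to stay close to the dataset while maximizing estimated returns, but the overall data flow (dataset in, policy out, evaluation on the real platform) is the same.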
Related papers
- A Retrospective on the Robot Air Hockey Challenge: Benchmarking Robust, Reliable, and Safe Learning Techniques for Real-world Robotics [53.33976793493801]
We organized the Robot Air Hockey Challenge at the NeurIPS 2023 conference.
We focus on practical challenges in robotics, such as the sim-to-real gap, low-level control issues, safety problems, real-time requirements, and the limited availability of real-world data.
Results show that solutions combining learning-based approaches with prior knowledge outperform those relying solely on data when real-world deployment is challenging.
arXiv Detail & Related papers (2024-11-08T17:20:47Z)
- An Architecture for Unattended Containerized (Deep) Reinforcement Learning with Webots [0.0]
Reinforcement learning with agents in a 3D world still faces challenges, such as the knowledge required to use simulation software and the utilization of standalone simulation software in unattended training pipelines.
arXiv Detail & Related papers (2024-02-06T12:08:01Z)
- Train Offline, Test Online: A Real Robot Learning Benchmark [113.19664479709587]
Train Offline, Test Online (TOTO) provides remote users with access to shared robotic hardware for evaluating methods on common tasks.
We present initial results on TOTO comparing five pretrained visual representations and four offline policy learning baselines, remotely contributed by five institutions.
The real promise of TOTO, however, lies in the future: we release the benchmark for additional submissions from any user, enabling easy, direct comparison to several methods without the need to obtain hardware or collect data.
arXiv Detail & Related papers (2023-06-01T17:42:08Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Accelerating Interactive Human-like Manipulation Learning with GPU-based Simulation and High-quality Demonstrations [25.393382192511716]
We present an immersive virtual reality teleoperation interface designed for interactive human-like manipulation on contact rich tasks.
We demonstrate the complementary strengths of massively parallel RL and imitation learning, yielding robust and natural behaviors.
arXiv Detail & Related papers (2022-12-05T09:37:27Z)
- Robot Learning from Randomized Simulations: A Review [59.992761565399185]
Deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
State-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive.
We focus on a technique named 'domain randomization', which is a method for learning from randomized simulations.
arXiv Detail & Related papers (2021-11-01T13:55:41Z)
- robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open source toolkit to increase the use of Deep Reinforcement Learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z)