Accelerating Robot Learning of Contact-Rich Manipulations: A Curriculum
Learning Study
- URL: http://arxiv.org/abs/2204.12844v2
- Date: Thu, 28 Apr 2022 06:57:39 GMT
- Title: Accelerating Robot Learning of Contact-Rich Manipulations: A Curriculum
Learning Study
- Authors: Cristian C. Beltran-Hernandez, Damien Petit, Ixchel G.
Ramirez-Alpizar, Kensuke Harada
- Abstract summary: This paper presents a study on accelerating robot learning of contact-rich manipulation tasks based on Curriculum Learning combined with Domain Randomization (DR).
We tackle complex industrial assembly tasks with position-controlled robots, such as insertion tasks.
Results also show that even when training only in simulation with toy tasks, our method can learn policies that can be transferred to the real-world robot.
- Score: 4.045850174820418
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Reinforcement Learning (RL) paradigm has been an essential tool for
automating robotic tasks. Despite advances in RL, it is still not widely
adopted in industry because it requires a large and costly amount of robot
interaction with the environment. Curriculum Learning (CL) has been proposed to
expedite learning, but most research has only been evaluated in simulated
environments, from video games to robotic toy tasks. This paper presents a
study on accelerating robot learning of contact-rich manipulation tasks based
on Curriculum Learning combined with Domain Randomization (DR). We tackle
complex industrial assembly tasks with position-controlled robots, such as
insertion tasks. We compare different curriculum designs and sampling
approaches for DR. Based on this study, we propose a method that significantly
outperforms previous work, which uses DR only (no CL), with less than a fifth
of the training time (samples). Results also show that even when training only
in simulation with toy tasks, our method can learn policies that transfer to a
real-world robot. The learned policies achieved success rates of up to 86\% on
real-world complex industrial insertion tasks (with tolerances of $\pm
0.01~mm$) not seen during training.
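The abstract's combination of a staged curriculum with Domain Randomization can be pictured with a short sketch: start with narrow randomization ranges and an easy (loose-tolerance) task, then widen the ranges as the policy's success rate improves. The Python below is a minimal illustration under assumptions of ours, not the authors' implementation: the class name `CurriculumDR`, the randomized parameters (initial pose error, hole tolerance, friction), and the success-rate threshold are all hypothetical.

```python
import numpy as np

class CurriculumDR:
    """Illustrative curriculum that widens domain-randomization ranges
    over discrete stages as the policy becomes more reliable."""

    def __init__(self, n_stages=5, success_threshold=0.8):
        self.stage = 0
        self.n_stages = n_stages
        self.success_threshold = success_threshold

    def difficulty(self):
        # Fraction of the full randomization range unlocked so far.
        return (self.stage + 1) / self.n_stages

    def sample_task(self, rng):
        d = self.difficulty()
        # Sample a randomized task within the currently unlocked fraction
        # of each range. Parameters and ranges are illustrative (units: mm).
        return {
            "pose_error_mm": rng.uniform(-2.0 * d, 2.0 * d, size=3),
            "tolerance_mm": 1.0 - 0.9 * d,  # shrinks toward a tight fit
            "friction": rng.uniform(1.0 - 0.5 * d, 1.0 + 0.5 * d),
        }

    def update(self, recent_success_rate):
        # Advance to a harder stage once the policy is reliable enough.
        if (recent_success_rate >= self.success_threshold
                and self.stage < self.n_stages - 1):
            self.stage += 1

# Usage: resample a randomized task each episode; update after evaluation.
curriculum = CurriculumDR()
rng = np.random.default_rng(0)
task = curriculum.sample_task(rng)
curriculum.update(recent_success_rate=0.85)
```

This shows only one simple linear schedule; the paper's comparison of "different curriculum designs and sampling approaches for DR" suggests alternatives such as per-parameter schedules or stochastic sampling across stages, which would replace the `difficulty()` and `sample_task()` logic above.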
Related papers
- Generalized Robot Learning Framework [10.03174544844559]
We present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments.
We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots.
arXiv Detail & Related papers (2024-09-18T15:34:31Z) - SERL: A Software Suite for Sample-Efficient Robotic Reinforcement
Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates and extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our key insight is to utilize offline reinforcement learning techniques to enable efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Learning Visual Tracking and Reaching with Deep Reinforcement Learning
on a UR10e Robotic Arm [2.2168889407389445]
Reinforcement learning algorithms provide the potential to enable robots to learn optimal solutions to complete new tasks without reprogramming them.
Current state-of-the-art in reinforcement learning relies on fast simulations and parallelization to achieve optimal performance.
This report outlines our initial research into the application of deep reinforcement learning on an industrial UR10e robot.
arXiv Detail & Related papers (2023-08-28T15:34:43Z) - Don't Start From Scratch: Leveraging Prior Data to Automate Robotic
Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z) - Accelerating Robotic Reinforcement Learning via Parameterized Action
Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - A Framework for Efficient Robotic Manipulation [79.10407063260473]
We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels.
arXiv Detail & Related papers (2020-12-14T22:18:39Z) - SQUIRL: Robust and Efficient Learning from Video Demonstration of
Long-Horizon Robotic Manipulation Tasks [8.756012472587601]
Deep reinforcement learning (RL) can be used to learn complex manipulation tasks.
However, RL requires the robot to collect a large amount of real-world experience.
SQUIRL performs a new but related long-horizon task robustly given only a single video demonstration.
arXiv Detail & Related papers (2020-03-10T20:26:26Z)