Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement
Learning
- URL: http://arxiv.org/abs/2203.08277v1
- Date: Tue, 15 Mar 2022 21:49:20 GMT
- Title: Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement
Learning
- Authors: Satoshi Kataoka, Seyed Kamyar Seyed Ghasemipour, Daniel Freeman, Igor
Mordatch
- Abstract summary: We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
We also discuss modifications to our simulated environment which lead to effective training of RL policies.
In this work, we design a Connect Task, where the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most successes in robotic manipulation have been restricted to single-arm
robots, which limits the range of solvable tasks to pick-and-place, insertion,
and object rearrangement. In contrast, dual- and multi-arm robot platforms
unlock a rich diversity of problems that can be tackled, such as laundry
folding and executing cooking skills. However, developing controllers for
multi-arm robots is complicated by a number of unique challenges, such as the
need for coordinated bimanual behaviors, and collision avoidance amongst
robots. Given these challenges, in this work we study how to solve bi-manual
tasks using reinforcement learning (RL) trained in simulation, such that the
resulting policies can be executed on real robotic platforms. Our RL approach
results in significant simplifications due to using real-time (4 Hz)
joint-space control and directly passing unfiltered observations to neural-network
policies. We also extensively discuss modifications to our simulated
environment which lead to effective training of RL policies. In addition to
designing control algorithms, a key challenge is how to design fair evaluation
tasks for bi-manual robots that stress bimanual coordination, while removing
orthogonal complicating factors such as high-level perception. In this work, we
design a Connect Task, where the aim is for two robot arms to pick up and
attach two blocks with magnetic connection points. We validate our approach
with two xArm6 robots and 3D printed blocks with magnetic attachments, and find
that our system has a 100% success rate at picking up blocks, and a 65% success
rate at the Connect Task.
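As a rough illustration of the control scheme described in the abstract, the real-time joint-space control can be sketched as a fixed-rate loop that feeds raw observations straight to the policy. This is a minimal sketch under stated assumptions: the `policy`, `read_joint_angles`, and `send_joint_targets` names are illustrative placeholders, not the authors' actual interfaces.

```python
import time

CONTROL_HZ = 4  # real-time joint-space control rate stated in the abstract
DT = 1.0 / CONTROL_HZ

def policy(observation):
    # Placeholder for a trained neural-network policy. Here it returns zero
    # joint deltas for two 6-DoF arms (12 joints total), standing in for
    # direct inference on the unfiltered observation.
    return [0.0] * 12

def control_loop(read_joint_angles, send_joint_targets, steps=8):
    """Run the policy at a fixed rate, passing unfiltered observations.

    read_joint_angles: callable returning the current joint-space observation.
    send_joint_targets: callable that forwards the policy's action to the arms.
    """
    for _ in range(steps):
        start = time.monotonic()
        obs = read_joint_angles()      # raw joint-space observation, no filtering
        action = policy(obs)           # direct policy inference
        send_joint_targets(action)
        # Sleep out the remainder of the 250 ms control period.
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```

The fixed 250 ms period is what keeps the loop "real-time" in the sense used above; the rest of the simplification comes from skipping any observation filtering between the robot and the policy.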
Related papers
- Large Language Models for Orchestrating Bimanual Robots [19.60907949776435]
We present LAnguage-model-based Bimanual ORchestration (LABOR) to analyze task configurations and devise coordination control policies.
We evaluate our method through simulated experiments involving two classes of long-horizon tasks using the NICOL humanoid robot.
arXiv Detail & Related papers (2024-04-02T15:08:35Z)
- Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
- Polybot: Training One Policy Across Robots While Embracing Variability [70.74462430582163]
We propose a set of key design decisions to train a single policy for deployment on multiple robotic platforms.
Our framework first aligns the observation and action spaces of our policy across embodiments via utilizing wrist cameras.
We evaluate our method on a dataset collected over 60 hours spanning 6 tasks and 3 robots with varying joint configurations and sizes.
arXiv Detail & Related papers (2023-07-07T17:21:16Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that the reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning [24.223788665601678]
Two xArm6 robots solve the U-shape assembly task with a success rate above 90% in simulation, and 50% on real hardware without any additional real-world fine-tuning.
Our results present a significant step forward for bi-arm capability on real hardware, and we hope our system can inspire future research on deep RL and sim-to-real transfer of bi-manual policies.
arXiv Detail & Related papers (2023-03-27T01:25:24Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Disentangled Attention as Intrinsic Regularization for Bimanual Multi-Object Manipulation [18.38312133753365]
We address the problem of solving complex bimanual robot manipulation tasks on multiple objects with sparse rewards.
We propose a novel technique called disentangled attention, which provides an intrinsic regularization for two robots to focus on separate sub-tasks and objects.
Experimental results show that our proposed intrinsic regularization successfully avoids domination and reduces conflicts for the policies.
arXiv Detail & Related papers (2021-06-10T16:53:04Z)
- Large Scale Distributed Collaborative Unlabeled Motion Planning with Graph Policy Gradients [122.85280150421175]
We present a learning method to solve the unlabeled motion problem with motion constraints and space constraints in 2D space for a large number of robots.
We employ a graph neural network (GNN) to parameterize policies for the robots.
arXiv Detail & Related papers (2021-02-11T21:57:43Z)
- Learning Multi-Arm Manipulation Through Collaborative Teleoperation [63.35924708783826]
Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks.
Many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk.
We present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms.
arXiv Detail & Related papers (2020-12-12T05:43:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.