Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning
- URL: http://arxiv.org/abs/2303.14870v1
- Date: Mon, 27 Mar 2023 01:25:24 GMT
- Title: Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning
- Authors: Satoshi Kataoka, Youngseog Chung, Seyed Kamyar Seyed Ghasemipour,
Pannag Sanketi, Shixiang Shane Gu, Igor Mordatch
- Abstract summary: Two xArm6 robots solve the U-shape assembly task with a success rate above 90% in simulation, and 50% on real hardware without any additional real-world fine-tuning.
Our results present a significant step forward for bi-arm capability on real hardware, and we hope our system can inspire future research on deep RL and Sim2Real transfer of bi-manual policies.
- Score: 24.223788665601678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most successes in robotic manipulation have been restricted to single-arm
gripper robots, whose low dexterity limits the range of solvable tasks to
pick-and-place, insertion, and object rearrangement. More complex tasks such
as assembly require dual and multi-arm platforms, but entail a suite of unique
challenges such as bi-arm coordination and collision avoidance, robust
grasping, and long-horizon planning. In this work we investigate the
feasibility of training deep reinforcement learning (RL) policies in simulation
and transferring them to the real world (Sim2Real) as a generic methodology for
obtaining performant controllers for real-world bi-manual robotic manipulation
tasks. As a testbed for bi-manual manipulation, we develop the U-Shape Magnetic
Block Assembly Task, wherein two robots with parallel grippers must connect 3
magnetic blocks to form a U-shape. Without a manually designed controller or
human demonstrations, we demonstrate that with careful Sim2Real considerations,
our policies trained with RL in simulation enable two xArm6 robots to solve the
U-shape assembly task with a success rate above 90% in simulation, and 50% on
real hardware without any additional real-world fine-tuning. Through careful
ablations, we highlight how each component of the system is critical for such
simple and successful policy learning and transfer, including task
specification, learning algorithm, direct joint-space control, behavior
constraints, perception and actuation noises, action delays and action
interpolation. Our results present a significant step forward for bi-arm
capability on real hardware, and we hope our system can inspire future research
on deep RL and Sim2Real transfer of bi-manual policies, drastically scaling up
the capability of real-world robot manipulators.
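
As a hedged illustration (not the authors' code), the Sim2Real considerations named in the abstract, namely perception and actuation noise, action delays, and action interpolation, can be sketched as a wrapper applied around a joint-space policy during simulated training. All class names, parameter names, and default values below are hypothetical.

```python
import random
from collections import deque


class Sim2RealPerturbations:
    """Illustrative sketch of three Sim2Real considerations from the
    abstract: perception/actuation noise, action delay, and linear
    action interpolation. Values and structure are hypothetical, not
    taken from the paper's implementation."""

    def __init__(self, n_joints, obs_noise_std=0.005, act_noise_std=0.005,
                 delay_steps=2, interp_substeps=4, seed=0):
        self.rng = random.Random(seed)
        self.obs_noise_std = obs_noise_std
        self.act_noise_std = act_noise_std
        self.interp_substeps = interp_substeps
        zero = [0.0] * n_joints
        # FIFO buffer: a policy command takes effect delay_steps ticks later,
        # emulating sensing/communication latency on real hardware.
        self.queue = deque([list(zero) for _ in range(delay_steps)])
        self.prev_target = list(zero)

    def perturb_obs(self, obs):
        """Add Gaussian perception noise to each joint reading."""
        return [x + self.rng.gauss(0.0, self.obs_noise_std) for x in obs]

    def step_action(self, target):
        """Delay and noise a joint-space target, then return the sub-step
        targets a low-level controller would ramp through (linear
        interpolation from the previous target to the new one)."""
        self.queue.append(list(target))
        delayed = self.queue.popleft()  # command issued delay_steps ago
        noisy = [x + self.rng.gauss(0.0, self.act_noise_std) for x in delayed]
        substeps = []
        for k in range(1, self.interp_substeps + 1):
            a = k / self.interp_substeps
            substeps.append([(1 - a) * p + a * q
                             for p, q in zip(self.prev_target, noisy)])
        self.prev_target = noisy
        return substeps
```

With noise set to zero the wrapper reduces to pure delay plus interpolation, which makes the delayed, ramped command sequence easy to inspect before enabling the stochastic terms.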
Related papers
- Generalize by Touching: Tactile Ensemble Skill Transfer for Robotic Furniture Assembly [24.161856591498825]
Tactile Ensemble Skill Transfer (TEST) is a pioneering offline reinforcement learning (RL) approach that incorporates tactile feedback in the control loop.
TEST's core design is to learn a skill transition model for high-level planning, along with a set of adaptive intra-skill goal-reaching policies.
Results indicate that TEST can achieve a success rate of 90% and is over 4 times more efficient than the generalization policy.
arXiv Detail & Related papers (2024-04-26T20:27:10Z) - Twisting Lids Off with Two Hands [82.21668778600414]
We show how policies trained in simulation can be effectively and efficiently transferred to the real world.
Specifically, we consider the problem of twisting lids of various bottle-like objects with two hands.
This is the first sim-to-real RL system that enables such capabilities on bimanual multi-fingered hands.
arXiv Detail & Related papers (2024-03-04T18:59:30Z) - Nonprehensile Planar Manipulation through Reinforcement Learning with
Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z) - Leveraging Sequentiality in Reinforcement Learning from a Single
Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z) - DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to
Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for
Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement
Learning [23.164743388342803]
We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
We also discuss modifications to our simulated environment which lead to effective training of RL policies.
In this work, we design a Connect Task, where the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
arXiv Detail & Related papers (2022-03-15T21:49:20Z) - Learning to Centralize Dual-Arm Assembly [0.6091702876917281]
This work focuses on assembly with humanoid robots by providing a framework for dual-arm peg-in-hole manipulation.
We reduce modeling effort to a minimum by using sparse rewards only.
We demonstrate the effectiveness of the framework on dual-arm peg-in-hole and analyze sample efficiency and success rates for different action spaces.
arXiv Detail & Related papers (2021-10-08T09:59:12Z) - Learning Multi-Arm Manipulation Through Collaborative Teleoperation [63.35924708783826]
Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks.
Many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk.
We present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms.
arXiv Detail & Related papers (2020-12-12T05:43:43Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.