Zero-Shot Transfer of Haptics-Based Object Insertion Policies
- URL: http://arxiv.org/abs/2301.12587v3
- Date: Thu, 8 Jun 2023 01:16:00 GMT
- Title: Zero-Shot Transfer of Haptics-Based Object Insertion Policies
- Authors: Samarth Brahmbhatt, Ankur Deka, Andrew Spielberg, Matthias Müller
- Abstract summary: Humans naturally exploit haptic feedback during contact-rich tasks like loading a dishwasher or stocking a bookshelf.
Current robotic systems focus on avoiding unexpected contact, often relying on strategically placed environment sensors.
We train a contact-exploiting manipulation policy in simulation for the contact-rich household task of loading plates into a slotted holder.
- Score: 11.711534127073492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans naturally exploit haptic feedback during contact-rich tasks like
loading a dishwasher or stocking a bookshelf. Current robotic systems focus on
avoiding unexpected contact, often relying on strategically placed environment
sensors. Recently, contact-exploiting manipulation policies have been trained
in simulation and deployed on real robots. However, they require some form of
real-world adaptation to bridge the sim-to-real gap, which might not be
feasible in all scenarios. In this paper we train a contact-exploiting
manipulation policy in simulation for the contact-rich household task of
loading plates into a slotted holder, which transfers without any fine-tuning
to the real robot. We investigate various factors necessary for this zero-shot
transfer, like time delay modeling, memory representation, and domain
randomization. Our policy transfers with minimal sim-to-real gap and
significantly outperforms heuristic and learnt baselines. It also generalizes
to plates of different sizes and weights. Demonstration videos and code are
available at https://sites.google.com/view/compliant-object-insertion.
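Two of the transfer factors named in the abstract, time delay modeling and domain randomization, can be conveyed with a short sketch. Below is a minimal, hypothetical gymnasium wrapper, not the paper's released code: the `set_physics` hook and the randomization ranges are assumptions for illustration.

```python
import collections
import numpy as np
import gymnasium as gym

class DelayAndRandomizeWrapper(gym.Wrapper):
    """Approximates real-world sensing latency and varies physics at reset.

    `set_physics` is a hypothetical hook on the underlying simulator;
    the friction/mass ranges are illustrative, not the paper's values.
    """

    def __init__(self, env, delay_steps=2,
                 friction_range=(0.5, 1.5), mass_range=(0.2, 1.0)):
        super().__init__(env)
        self.delay_steps = delay_steps
        self.friction_range = friction_range
        self.mass_range = mass_range
        self._obs_buffer = collections.deque(maxlen=delay_steps + 1)

    def reset(self, **kwargs):
        # Domain randomization: sample new physics parameters each episode.
        self.env.unwrapped.set_physics(
            friction=np.random.uniform(*self.friction_range),
            plate_mass=np.random.uniform(*self.mass_range),
        )
        obs, info = self.env.reset(**kwargs)
        self._obs_buffer.clear()
        self._obs_buffer.extend([obs] * (self.delay_steps + 1))
        return self._obs_buffer[0], info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self._obs_buffer.append(obs)
        # The agent sees the observation from `delay_steps` ago,
        # mimicking real sensing and actuation latency.
        return self._obs_buffer[0], reward, terminated, truncated, info
```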
Related papers
- Flow as the Cross-Domain Manipulation Interface [73.15952395641136]
Im2Flow2Act enables robots to acquire real-world manipulation skills without the need for real-world robot training data.
Im2Flow2Act comprises two components: a flow generation network and a flow-conditioned policy.
We demonstrate Im2Flow2Act's capabilities in a variety of real-world tasks, including the manipulation of rigid, articulated, and deformable objects.
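As a rough illustration of the two-component design summarized above, the PyTorch sketch below shows only the conditioning pattern: the actual Im2Flow2Act predicts dense image-space object flow, and all dimensions here are made up.

```python
import torch
import torch.nn as nn

class FlowConditionedPolicy(nn.Module):
    """Toy two-stage layout: predict a flow representation from image
    features, then condition an action head on it. Illustrative only."""

    def __init__(self, img_dim=512, flow_dim=128, state_dim=16, act_dim=7):
        super().__init__()
        # Stage 1: flow generation network (here, a stand-in MLP).
        self.flow_net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, flow_dim))
        # Stage 2: flow-conditioned policy over (flow, robot state).
        self.policy = nn.Sequential(
            nn.Linear(flow_dim + state_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim))

    def forward(self, img_feat, robot_state):
        flow = self.flow_net(img_feat)
        return self.policy(torch.cat([flow, robot_state], dim=-1))
```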
arXiv Detail & Related papers (2024-07-21T16:15:02Z)
- Sim-to-Real Transfer of Deep Reinforcement Learning Agents for Online Coverage Path Planning [15.792914346054502]
We tackle the challenge of sim-to-real transfer of reinforcement learning (RL) agents for coverage path planning (CPP).
We bridge the sim-to-real gap through a semi-virtual environment, including a real robot and real-time aspects, while utilizing a simulated sensor and obstacles.
We find that a high inference frequency allows first-order Markovian policies to transfer directly from simulation, while higher-order policies can be fine-tuned to further reduce the sim-to-real gap.
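The first-order vs. higher-order distinction can be made concrete with a standard frame-stacking wrapper: k = 1 recovers a first-order Markovian policy, while k > 1 conditions the policy on short history. This is a generic sketch, not the paper's implementation.

```python
import collections
import numpy as np
import gymnasium as gym

class FrameStack(gym.ObservationWrapper):
    """Stacks the last k observations along the feature axis.
    (Updating observation_space is omitted for brevity.)"""

    def __init__(self, env, k=4):
        super().__init__(env)
        self.k = k
        self.frames = collections.deque(maxlen=k)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.frames.clear()
        self.frames.extend([obs] * (self.k - 1))
        return self.observation(obs), info

    def observation(self, obs):
        self.frames.append(obs)
        return np.concatenate(list(self.frames), axis=-1)
```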
arXiv Detail & Related papers (2024-06-07T13:24:19Z)
- Cross-Embodiment Robot Manipulation Skill Transfer using Latent Space Alignment [24.93621734941354]
This paper focuses on transferring control policies between robot manipulators with different morphology.
The key insight is to project the state and action spaces of the source and target robots to a common latent space representation.
We demonstrate sim-to-sim and sim-to-real manipulation policy transfer between source and target robots that differ in state spaces, action spaces, and embodiments.
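A minimal sketch of the shared-latent-space idea: per-robot encoders, a single latent policy, and an alignment loss on paired states. All dimensions and the pairing scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

class LatentAlignment(nn.Module):
    """One encoder per embodiment into a shared latent space; a single
    latent policy then serves both robots."""

    def __init__(self, src_state=10, tgt_state=14, tgt_act=7,
                 latent=32, latent_act=8):
        super().__init__()
        self.enc_src = mlp(src_state, latent)   # source state -> latent
        self.enc_tgt = mlp(tgt_state, latent)   # target state -> latent
        self.latent_policy = mlp(latent, latent_act)
        self.dec_tgt = mlp(latent_act, tgt_act) # latent action -> target action

    def align_loss(self, paired_src, paired_tgt):
        # Corresponding task configurations on the two robots should map
        # to nearby latent codes.
        return ((self.enc_src(paired_src) - self.enc_tgt(paired_tgt)) ** 2).mean()

    def act_target(self, tgt_state):
        return self.dec_tgt(self.latent_policy(self.enc_tgt(tgt_state)))
```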
arXiv Detail & Related papers (2024-06-04T05:00:24Z)
- TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction [25.36756787147331]
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots.
We propose a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework.
We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly.
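A generic behavior-cloning loop over human-corrected actions conveys the flavor of learning from online correction; this is an assumption-laden sketch, not TRANSIC's actual training procedure.

```python
import torch
import torch.nn as nn

def finetune_from_corrections(policy, corrections, epochs=10, lr=1e-4):
    """Behavior-clone human corrections on top of a sim-trained policy.

    `corrections` is a list of (obs, corrected_action) tensor pairs
    gathered while a human supervised real-robot rollouts; the loop is
    a generic BC sketch, not the paper's exact procedure."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, corrected_action in corrections:
            opt.zero_grad()
            loss = loss_fn(policy(obs), corrected_action)
            loss.backward()
            opt.step()
    return policy
```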
arXiv Detail & Related papers (2024-05-16T17:59:07Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique that reliably imagines and predicts the outcomes of possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
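Sampling-based predictive control of this kind can be sketched as follows; `sim.set_state`, `sim.rollout`, and `sim.score` are hypothetical simulator hooks standing in for the paper's Real-to-Sim machinery.

```python
import numpy as np

def predictive_push_control(sim, real_state, n_samples=64, horizon=10, act_dim=2):
    """Imagine each candidate action sequence in a simulator synced to the
    real state, then execute the first action of the best one."""
    best_score, best_action = -np.inf, None
    for _ in range(n_samples):
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, act_dim))
        sim.set_state(real_state)        # real-to-sim: sync simulator to reality
        outcome = sim.rollout(actions)   # imagine the pushes
        score = sim.score(outcome)       # e.g., proximity of object to goal
        if score > best_score:
            best_score, best_action = score, actions[0]
    # Executed on the real robot; the loop repeats at every control step,
    # which is what makes the controller closed-loop and reactive.
    return best_action
```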
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
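The online context-inference idea can be sketched as a recurrent encoder over recent interaction history; sizes and architecture here are illustrative, not COCOI's.

```python
import torch
import torch.nn as nn

class ContextConditionedPolicy(nn.Module):
    """Encode a short window of recent (state, action) pairs into a
    dynamics embedding and condition the policy on it."""

    def __init__(self, state_dim=12, act_dim=3, ctx_dim=16):
        super().__init__()
        self.context_enc = nn.GRU(state_dim + act_dim, ctx_dim, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(state_dim + ctx_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim))

    def forward(self, state, history):
        # history: (batch, window, state_dim + act_dim) of recent interaction,
        # which implicitly reveals mass, friction, and other dynamics properties.
        _, h = self.context_enc(history)
        ctx = h[-1]
        return self.policy(torch.cat([state, ctx], dim=-1))
```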
arXiv Detail & Related papers (2020-11-23T08:20:21Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often do not work directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed from point clouds, combined with environment randomization.
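A PointNet-style encoder is a common way to turn a point cloud into a fixed-size RL observation; the sketch below shows that pattern under assumed dimensions and is not the paper's network.

```python
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Per-point MLP followed by max pooling, giving an order-invariant
    feature the RL policy can consume. Point clouds tend to look similar
    in simulation and reality, which helps sim-to-real transfer."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.per_point = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, points):
        # points: (batch, n_points, 3); max-pool over the point dimension.
        return self.per_point(points).max(dim=1).values
```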
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera [58.720142291102135]
We use a simulator to learn the peg-hole insertion problem and then transfer the learned model to the real robot.
We show that the transferred policy, which takes only RGB-D images and joint information (proprioception), can perform well on the real robot.
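A minimal multimodal policy fusing the two observation streams named above (RGB-D and proprioception) might look like the following; the architecture is an illustrative assumption.

```python
import torch
import torch.nn as nn

class RGBDProprioPolicy(nn.Module):
    """Fuses a 4-channel RGB-D image with joint proprioception."""

    def __init__(self, n_joints=7, act_dim=7):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(32 + n_joints, 128), nn.ReLU(),
            nn.Linear(128, act_dim))

    def forward(self, rgbd, joints):
        # rgbd: (batch, 4, H, W); joints: (batch, n_joints)
        return self.head(torch.cat([self.vision(rgbd), joints], dim=-1))
```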
arXiv Detail & Related papers (2020-05-29T05:58:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.