Twisting Lids Off with Two Hands
- URL: http://arxiv.org/abs/2403.02338v1
- Date: Mon, 4 Mar 2024 18:59:30 GMT
- Title: Twisting Lids Off with Two Hands
- Authors: Toru Lin, Zhao-Heng Yin, Haozhi Qi, Pieter Abbeel, Jitendra Malik
- Abstract summary: We show that policies trained in simulation using deep reinforcement learning can be effectively transferred to the real world.
Our findings serve as compelling evidence that deep reinforcement learning combined with sim-to-real transfer remains a promising approach for addressing manipulation problems of unprecedented complexity.
- Score: 88.20584085182857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manipulating objects with two multi-fingered hands has been a long-standing
challenge in robotics, attributed to the contact-rich nature of many
manipulation tasks and the complexity inherent in coordinating a
high-dimensional bimanual system. In this work, we consider the problem of
twisting lids of various bottle-like objects with two hands, and demonstrate
that policies trained in simulation using deep reinforcement learning can be
effectively transferred to the real world. With novel engineering insights into
physical modeling, real-time perception, and reward design, the policy
demonstrates generalization capabilities across a diverse set of unseen
objects, showcasing dynamic and dexterous behaviors. Our findings serve as
compelling evidence that deep reinforcement learning combined with sim-to-real
transfer remains a promising approach for addressing manipulation problems of
unprecedented complexity.
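The abstract does not spell out the training pipeline, but a standard ingredient of sim-to-real transfer for contact-rich manipulation is domain randomization: physics parameters are resampled each episode so the learned policy is robust to the sim-to-real gap. The sketch below is illustrative only; the parameter names and ranges (`lid_friction`, `object_mass_kg`, `joint_damping`) are assumptions, not values from the paper.

```python
import random

# Hypothetical randomization ranges; the paper's actual physics-modeling
# parameters are not given in this abstract, so these are placeholders.
PARAM_RANGES = {
    "lid_friction": (0.5, 1.5),
    "object_mass_kg": (0.05, 0.5),
    "joint_damping": (0.01, 0.1),
}

def sample_physics_params(rng: random.Random) -> dict:
    """Draw one set of physics parameters for a single training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def train(num_episodes: int, seed: int = 0) -> list:
    """Outline of a randomized training loop: each episode sees new physics."""
    rng = random.Random(seed)
    history = []
    for _ in range(num_episodes):
        params = sample_physics_params(rng)
        # A real pipeline would reset the simulator with `params`,
        # roll out the policy, and perform an RL update here.
        history.append(params)
    return history
```

Because every episode uses freshly sampled dynamics, the policy cannot overfit to one simulator configuration, which is what makes zero-shot transfer to real hardware plausible.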
Related papers
- Multimodal Visual-Tactile Representation Learning through Self-Supervised Contrastive Pre-Training [0.850206009406913]
MViTac is a novel methodology that leverages contrastive learning to integrate vision and touch sensations in a self-supervised fashion.
By exploiting both sensory inputs, MViTac applies intra- and inter-modality losses to learn representations, resulting in improved material-property classification and more accurate grasp prediction.
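The summary names intra- and inter-modality contrastive losses but gives no formulation. A common instantiation of such losses is an InfoNCE-style objective, where paired embeddings (e.g. vision and touch of the same sample, or two augmented views from one sensor) are pulled together and other batch items act as negatives. The sketch below is a generic illustration under that assumption, not MViTac's actual implementation.

```python
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray,
             temperature: float = 0.1) -> float:
    """InfoNCE-style contrastive loss over a batch of paired embeddings.

    Row i of `anchors` should score highest against row i of `positives`;
    every other row in the batch serves as a negative.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                       # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))           # match row i with row i

# Inter-modality use: vision embeddings vs. touch embeddings of the same
# samples. Intra-modality use would pair two augmented views from one sensor.
rng = np.random.default_rng(0)
vision = rng.normal(size=(8, 32))
touch = vision + 0.01 * rng.normal(size=(8, 32))  # nearly aligned pairs
loss_inter = info_nce(vision, touch)
```

Well-aligned cross-modal pairs yield a low loss, while mismatched pairs yield a high one, so minimizing this objective drives the two encoders toward a shared representation.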
arXiv Detail & Related papers (2024-01-22T15:11:57Z)
- Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration [79.76009817889397]
We propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of food compliant objects.
We present an LfD learning policy that automatically removes inconsistent demonstrations, and estimates the teacher's intended policy.
The proposed approach has a vast range of potential applications in the aforementioned industry sectors.
arXiv Detail & Related papers (2023-09-22T13:30:26Z)
- RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks [70.69063219750952]
We present a Hybrid Hierarchical Learning framework, the RObotic MAnipulation Network (ROMAN).
ROMAN achieves task versatility and robust failure recovery by integrating behavioural cloning, imitation learning, and reinforcement learning.
Experimental results show that by orchestrating and activating these specialised manipulation experts, ROMAN generates correct sequential activations for accomplishing long sequences of sophisticated manipulation tasks.
arXiv Detail & Related papers (2023-06-30T20:35:22Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a single high-quality motion-capture example.
We propose a simple greedy curriculum search algorithm that can successfully apply to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Learning Category-Level Generalizable Object Manipulation Policy via Generative Adversarial Self-Imitation Learning from Demonstrations [14.001076951265558]
Generalizable object manipulation skills are critical for intelligent robots to work in real-world complex scenes.
In this work, we tackle this category-level object manipulation policy learning problem via imitation learning in a task-agnostic manner.
We propose several general but critical techniques, including generative adversarial self-imitation learning from demonstrations, progressive growing of discriminator, and instance-balancing for expert buffer.
arXiv Detail & Related papers (2022-03-04T02:52:02Z)
- Learning Multi-Arm Manipulation Through Collaborative Teleoperation [63.35924708783826]
Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks.
Many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk.
We present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms.
arXiv Detail & Related papers (2020-12-12T05:43:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.