Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient
Dexterous Manipulation
- URL: http://arxiv.org/abs/2203.13251v1
- Date: Thu, 24 Mar 2022 17:58:54 GMT
- Title: Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient
Dexterous Manipulation
- Authors: Sridhar Pandian Arunachalam, Sneha Silwal, Ben Evans, Lerrel Pinto
- Abstract summary: 'Dexterous Imitation Made Easy' (DIME) is a new imitation learning framework for dexterous manipulation.
DIME only requires a single RGB camera to observe a human operator and teleoperate our robotic hand.
On both simulation and real robot benchmarks we demonstrate that DIME can be used to solve complex, in-hand manipulation tasks.
- Score: 13.135013586592585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimizing behaviors for dexterous manipulation has been a longstanding
challenge in robotics, with a variety of methods from model-based control to
model-free reinforcement learning having been previously explored in the
literature. Perhaps one of the most powerful techniques to learn complex
manipulation strategies is imitation learning. However, collecting and learning
from demonstrations in dexterous manipulation is quite challenging. The
complex, high-dimensional action space involved in multi-finger control often
leads to poor sample efficiency of learning-based methods. In this work, we
propose 'Dexterous Imitation Made Easy' (DIME), a new imitation learning
framework for dexterous manipulation. DIME only requires a single RGB camera to
observe a human operator and teleoperate our robotic hand. Once demonstrations
are collected, DIME employs standard imitation learning methods to train
dexterous manipulation policies. On both simulation and real robot benchmarks
we demonstrate that DIME can be used to solve complex, in-hand manipulation
tasks such as 'flipping', 'spinning', and 'rotating' objects with the Allegro
hand. Our framework along with pre-collected demonstrations is publicly
available at https://nyu-robot-learning.github.io/dime.
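The abstract does not pin down which of the "standard imitation learning methods" DIME applies to the collected demonstrations, so the following is only a minimal behavior-cloning sketch over (observation, action) demonstration pairs; the dimensions, network size, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Stand-in demonstration buffer: observations (e.g. object pose plus
# fingertip positions) paired with 16-DoF Allegro joint targets.
# All shapes and hyperparameters here are illustrative assumptions.
obs_dim, act_dim = 21, 16
demo_obs = torch.randn(1000, obs_dim)
demo_act = torch.randn(1000, act_dim)

# A small MLP policy mapping observations to joint-position actions.
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Behavior cloning: regress demonstrated actions from observations.
for step in range(1000):
    idx = torch.randint(0, demo_obs.shape[0], (256,))
    loss = nn.functional.mse_loss(policy(demo_obs[idx]), demo_act[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A non-parametric alternative in the same spirit is nearest-neighbor imitation: at test time, replay the demonstrated action whose stored observation is closest to the current one.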
Related papers
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
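The GSR summary names the mechanism but not its details; purely as an illustration of the retrieval idea, the sketch below links nearby demonstration states into a graph and runs Dijkstra search to retrieve a path between them. Every name, threshold, and distance metric here is a hypothetical stand-in, not the paper's formulation.

```python
import heapq
import numpy as np

def build_graph(states, radius=0.5):
    """Connect demonstration states whose Euclidean distance is < radius."""
    edges = {i: [] for i in range(len(states))}
    for i in range(len(states)):
        for j in range(len(states)):
            d = float(np.linalg.norm(states[i] - states[j]))
            if i != j and d < radius:
                edges[i].append((j, d))
    return edges

def retrieve_path(edges, start, goal):
    """Dijkstra search: cheapest chain of demo states from start to goal."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if goal not in dist:
        return None  # goal not reachable from start in this graph
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

states = np.random.rand(200, 3)          # stand-in demonstration states
plan = retrieve_path(build_graph(states), start=0, goal=7)
```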
- SWBT: Similarity Weighted Behavior Transformer with the Imperfect Demonstration for Robotic Manipulation [32.78083518963342]
We propose a novel framework named Similarity Weighted Behavior Transformer (SWBT).
SWBT effectively learns from both expert and imperfect demonstrations without interacting with the environment.
We are the first to attempt to integrate imperfect demonstrations into the offline imitation learning setting for robot manipulation tasks.
arXiv Detail & Related papers (2024-01-17T04:15:56Z)
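Similarity-weighted behavior cloning can be read as down-weighting imperfect transitions by how far they sit from expert behavior before cloning them; the sketch below implements that reading with a simple distance-based weight. The actual SWBT model is transformer-based and learns its similarity scores, so everything here is an assumed simplification.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 16, 7
expert_obs = torch.randn(500, obs_dim)   # stand-in expert demos
expert_act = torch.randn(500, act_dim)
imperf_obs = torch.randn(2000, obs_dim)  # stand-in imperfect demos
imperf_act = torch.randn(2000, act_dim)

policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                       nn.Linear(128, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def similarity_weights(obs):
    """Weight each imperfect sample by proximity to its nearest expert
    state (a crude stand-in for a learned similarity score)."""
    d = torch.cdist(obs, expert_obs).min(dim=1).values
    return torch.exp(-d)  # in (0, 1]: closer to expert -> larger weight

w = similarity_weights(imperf_obs)
all_obs = torch.cat([expert_obs, imperf_obs])
all_act = torch.cat([expert_act, imperf_act])
all_w = torch.cat([torch.ones(len(expert_obs)), w])

for step in range(200):
    idx = torch.randint(0, len(all_obs), (256,))
    err = (policy(all_obs[idx]) - all_act[idx]).pow(2).mean(dim=1)
    loss = (all_w[idx] * err).mean()  # similarity-weighted BC loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```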
- XSkill: Cross Embodiment Skill Discovery [41.624343257852146]
XSkill is an imitation learning framework that discovers a cross-embodiment representation called skill prototypes purely from unlabeled human and robot manipulation videos.
Our experiments in simulation and real-world environments show that the discovered skill prototypes facilitate skill transfer and composition for unseen tasks.
arXiv Detail & Related papers (2023-07-19T12:51:28Z)
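XSkill's prototypes are learned jointly with a video encoder; as a loose illustration of what "prototypes shared across embodiments" means, the sketch below simply clusters pooled clip embeddings from both embodiments with k-means and treats the cluster centers as prototypes. The encoder, dimensions, and clustering choice are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-ins for embeddings of short clips from human and robot videos,
# assumed to come from a shared (cross-embodiment) video encoder.
human_emb = rng.normal(size=(400, 64))
robot_emb = rng.normal(size=(300, 64))

# Cluster the pooled embeddings; cluster centers act as "skill
# prototypes" shared across embodiments.
proto = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.concatenate([human_emb, robot_emb]))

def skill_of(clip_emb):
    """Map a new clip embedding to its nearest skill prototype index."""
    return int(proto.predict(clip_emb[None])[0])

print(skill_of(rng.normal(size=64)))
```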
- DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training [10.808149303943948]
We learn dexterous object manipulation using simulated one- or two-armed robots equipped with multi-fingered hand end-effectors.
We introduce a decentralized Population-Based Training (PBT) algorithm that allows us to massively amplify the exploration capabilities of deep reinforcement learning.
arXiv Detail & Related papers (2023-05-20T07:25:27Z)
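Population-Based Training follows a standard exploit-and-explore loop: periodically copy the parameters and perturbed hyperparameters of strong population members over weak ones. The skeleton below shows that loop with a toy scalar objective standing in for actual RL training; the decentralized aspect of DexPBT (no central orchestrator) is not modeled here.

```python
import random

def train_step(agent):
    """Stand-in for one RL training iteration; updates the agent's fitness."""
    agent["score"] += random.gauss(agent["lr"], 0.1)
    return agent

population = [{"lr": random.uniform(1e-4, 1e-2), "score": 0.0}
              for _ in range(8)]

for generation in range(50):
    population = [train_step(a) for a in population]
    population.sort(key=lambda a: a["score"], reverse=True)
    # Exploit: the weakest agents copy a top performer.
    # Explore: the copied hyperparameters are randomly perturbed.
    for loser in population[-2:]:
        winner = random.choice(population[:2])
        loser["score"] = winner["score"]  # stands in for copying weights
        loser["lr"] = winner["lr"] * random.choice([0.8, 1.2])

print(max(a["score"] for a in population))
```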
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
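Defining tasks with image examples typically reduces to training a success classifier per sub-task and using its output as a reward signal; the sketch below shows that pattern with a toy classifier and hypothetical names, not the paper's actual system.

```python
import torch
import torch.nn as nn

class SubstepClassifier(nn.Module):
    """Scores whether an image shows a completed sub-task (toy CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1))

    def forward(self, img):
        return torch.sigmoid(self.net(img))

# One classifier per user-defined sub-task, each trained on the user's
# positive image examples vs. negatives (training loop omitted here).
substeps = [SubstepClassifier() for _ in range(3)]

def reward(img, current_substep):
    """Sparse reward: 1 when the active sub-task's classifier fires."""
    p = substeps[current_substep](img)
    return float(p.item() > 0.9)

frame = torch.randn(1, 3, 64, 64)  # stand-in camera frame
print(reward(frame, current_substep=0))
```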
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
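The summary above describes a modular, fully differentiable combination of neural video decoding and contact-mechanics priors. The sketch below shows only the shape of such a pipeline: a tiny CNN predicts push parameters and a hand-written differentiable "contact gate" (a crude stand-in for a real mechanics prior) propagates object motion, so gradients flow end to end. Nothing here reproduces the paper's actual models.

```python
import torch
import torch.nn as nn

# Neural module: predicts planar push parameters from an image pair
# (6 channels = two stacked RGB frames).
encoder = nn.Sequential(
    nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4))  # (contact_x, contact_y, push_dx, push_dy)

def push_prior(obj_xy, params):
    """Differentiable stand-in for a contact-mechanics prior: the object
    moves with the push only when the contact point is near it."""
    contact, push = params[:, :2], params[:, 2:]
    gap = (contact - obj_xy).norm(dim=1, keepdim=True)
    gate = torch.sigmoid(5.0 - 10.0 * gap)  # ~1 in contact, ~0 otherwise
    return obj_xy + gate * push

frames = torch.randn(8, 6, 64, 64)   # stand-in image pairs
obj_xy = torch.zeros(8, 2)
target = torch.randn(8, 2)           # observed next object positions
pred = push_prior(obj_xy, encoder(frames))
loss = (pred - target).pow(2).mean()
loss.backward()                      # gradients flow through both modules
```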
- DexMV: Imitation Learning for Dexterous Manipulation from Human Videos [11.470141313103465]
We propose a new platform and pipeline, DexMV, for imitation learning to bridge the gap between computer vision and robot learning.
We design a platform with: (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand conducting the same tasks.
We show that the demonstrations can indeed improve robot learning by a large margin and solve complex tasks that reinforcement learning alone cannot.
arXiv Detail & Related papers (2021-08-12T17:51:18Z)
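The DexMV summary says demonstrations derived from human video improve robot learning, but not which algorithm consumes them; a common pattern is to add a behavior-cloning term to an RL objective, sketched below with a placeholder RL term and hypothetical names throughout.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 30, 30
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                       nn.Linear(256, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Stand-ins for demonstrations retargeted from human hand video.
demo_obs = torch.randn(512, obs_dim)
demo_act = torch.randn(512, act_dim)

def rl_loss(policy):
    """Placeholder for the RL term (e.g. a policy-gradient surrogate);
    returns zero here so the sketch stays self-contained."""
    obs = torch.randn(256, obs_dim)
    return policy(obs).pow(2).mean() * 0.0

for step in range(100):
    bc = nn.functional.mse_loss(policy(demo_obs), demo_act)
    loss = rl_loss(policy) + 0.1 * bc  # demos regularize exploration
    opt.zero_grad()
    loss.backward()
    opt.step()
```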
- A Framework for Efficient Robotic Manipulation [79.10407063260473]
We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels.
arXiv Detail & Related papers (2020-12-14T22:18:39Z)
- Learning Object Manipulation Skills via Approximate State Estimation from Real Videos [47.958512470724926]
Humans are adept at learning new tasks by watching a few instructional videos.
On the other hand, robots that learn new actions either require a lot of effort through trial and error, or use expert demonstrations that are challenging to obtain.
In this paper, we explore a method that facilitates learning object manipulation skills directly from videos.
arXiv Detail & Related papers (2020-11-13T08:53:47Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.