DexMV: Imitation Learning for Dexterous Manipulation from Human Videos
- URL: http://arxiv.org/abs/2108.05877v1
- Date: Thu, 12 Aug 2021 17:51:18 GMT
- Title: DexMV: Imitation Learning for Dexterous Manipulation from Human Videos
- Authors: Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang
Fu, Xiaolong Wang
- Abstract summary: We propose a new platform and pipeline, DexMV, for imitation learning to bridge the gap between computer vision and robot learning.
We design a platform with: (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand conducting the same tasks.
We show that the demonstrations can indeed improve robot learning by a large margin and solve the complex tasks which reinforcement learning alone cannot solve.
- Score: 11.470141313103465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While we have made significant progress on understanding hand-object
interactions in computer vision, it is still very challenging for robots to
perform complex dexterous manipulation. In this paper, we propose a new
platform and pipeline, DexMV (Dex Manipulation from Videos), for imitation
learning to bridge the gap between computer vision and robot learning. We
design a platform with: (i) a simulation system for complex dexterous
manipulation tasks with a multi-finger robot hand and (ii) a computer vision
system to record large-scale demonstrations of a human hand conducting the same
tasks. In our new pipeline, we extract 3D hand and object poses from the
videos, and convert them to robot demonstrations via motion retargeting. We
then apply and compare multiple imitation learning algorithms with the
demonstrations. We show that the demonstrations can indeed improve robot
learning by a large margin and solve the complex tasks which reinforcement
learning alone cannot solve. Project page with video:
https://yzqin.github.io/dexmv/
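To make the pipeline in the abstract more concrete (estimate 3D hand and object poses from video, retarget the hand motion to the robot hand, then run imitation learning on the resulting demonstrations), the snippet below sketches the motion-retargeting step only. It is not the DexMV code: it is a minimal, generic position-based retargeting sketch in which the forward-kinematics function `fk_fingertips`, the fingertip keypoint input, and all names are assumptions supplied for illustration.

```python
# Illustrative sketch only -- not the DexMV implementation.
# Position-based motion retargeting: per video frame, find robot joint angles
# whose fingertip positions match the human fingertip keypoints estimated by
# the vision system, with a smoothness penalty between consecutive frames.
import numpy as np
from scipy.optimize import minimize


def retarget_sequence(human_fingertips_seq, fk_fingertips, q_init,
                      smooth_weight=1e-2):
    """human_fingertips_seq: (T, F, 3) fingertip positions from hand-pose estimation.
    fk_fingertips: callable mapping joint angles q -> (F, 3) robot fingertip positions
                   (assumed to be provided by the robot hand model, e.g. from a URDF).
    q_init: (num_joints,) initial joint configuration.
    Returns a (T, num_joints) retargeted joint trajectory."""
    q_prev = np.asarray(q_init, dtype=float)
    trajectory = []
    for targets in human_fingertips_seq:
        def cost(q, targets=targets, q_prev=q_prev):
            match = np.sum((fk_fingertips(q) - targets) ** 2)   # fingertip matching
            smooth = smooth_weight * np.sum((q - q_prev) ** 2)  # temporal smoothness
            return match + smooth
        q_prev = minimize(cost, q_prev, method="L-BFGS-B").x
        trajectory.append(q_prev)
    return np.stack(trajectory)
```

Under these assumptions, the retargeted joint trajectories, paired with the estimated object poses, would form the demonstrations that the demonstration-augmented imitation learning algorithms consume alongside environment rollouts.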
Related papers
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Cross-Domain Transfer via Semantic Skill Imitation [49.83150463391275]
We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL).
Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove".
arXiv Detail & Related papers (2022-12-14T18:46:14Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (see the sketch after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Signs of Language: Embodied Sign Language Fingerspelling Acquisition from Demonstrations for Human-Robot Interaction [1.0166477175169308]
We propose an approach for learning dexterous motor imitation from video examples without additional information.
We first build a URDF model of a robotic hand with a single actuator for each joint.
We then leverage pre-trained deep vision models to extract the 3D pose of the hand from RGB videos.
arXiv Detail & Related papers (2022-09-12T10:42:26Z)
- From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation [26.738893736520364]
We introduce a novel single-camera teleoperation system to collect the 3D demonstrations efficiently with only an iPad and a computer.
We construct a customized robot hand for each user in the physical simulator, a manipulator that matches the kinematic structure and shape of the operator's hand.
With imitation learning using our data, we show large improvements over baselines on multiple complex manipulation tasks.
arXiv Detail & Related papers (2022-04-26T17:59:51Z)
- Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation [13.135013586592585]
'Dexterous Imitation Made Easy' (DIME) is a new imitation learning framework for dexterous manipulation.
DIME only requires a single RGB camera to observe a human operator and teleoperate our robotic hand.
On both simulation and real robot benchmarks we demonstrate that DIME can be used to solve complex, in-hand manipulation tasks.
arXiv Detail & Related papers (2022-03-24T17:58:54Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
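Referring back to the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, the sketch below illustrates the general idea of an embedding-distance reward: observations are mapped into an embedding space by an encoder trained with a time-contrastive objective, and the reward is the negative distance to the goal embedding. The encoder argument and all names are assumptions for illustration, not that paper's implementation.

```python
# Illustrative sketch only -- not the implementation from the paper above.
# Embedding-distance reward: given an encoder phi trained with a time-contrastive
# objective (frames close in time map to nearby embeddings, distant frames map
# far apart), the reward is the negative distance to the goal image's embedding.
import numpy as np


def embedding_reward(obs_image, goal_image, encode, scale=1.0):
    """encode: callable mapping an image to a 1-D embedding vector (assumed given).
    Returns a scalar reward that increases as the observation approaches the goal
    in embedding space."""
    z_obs = np.asarray(encode(obs_image), dtype=float)
    z_goal = np.asarray(encode(goal_image), dtype=float)
    return -scale * float(np.linalg.norm(z_obs - z_goal))


# Usage sketch: substitute this learned reward for the environment's task reward
# inside any RL loop, e.g. r_t = embedding_reward(o_t, o_goal, encode=phi).
```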
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.