Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations
- URL: http://arxiv.org/abs/2307.05959v1
- Date: Wed, 12 Jul 2023 07:04:53 GMT
- Title: Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations
- Authors: Moo Jin Kim, Jiajun Wu, Chelsea Finn
- Abstract summary: Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
- Score: 66.47064743686953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Eye-in-hand cameras have shown promise in enabling greater sample efficiency
and generalization in vision-based robotic manipulation. However, for robotic
imitation, it is still expensive to have a human teleoperator collect large
amounts of expert demonstrations with a real robot. Videos of humans performing
tasks, on the other hand, are much cheaper to collect since they eliminate the
need for expertise in robotic teleoperation and can be quickly captured in a
wide range of scenarios. Therefore, human video demonstrations are a promising
data source for learning generalizable robotic manipulation policies at scale.
In this work, we augment narrow robotic imitation datasets with broad unlabeled
human video demonstrations to greatly enhance the generalization of eye-in-hand
visuomotor policies. Although a clear visual domain gap exists between human
and robot data, our framework does not need to employ any explicit domain
adaptation method, as we leverage the partial observability of eye-in-hand
cameras as well as a simple fixed image masking scheme. On a suite of eight
real-world tasks involving both 3-DoF and 6-DoF robot arm control, our method
improves the success rates of eye-in-hand manipulation policies by 58%
(absolute) on average, enabling robots to generalize to both new environment
configurations and new tasks that are unseen in the robot demonstration data.
See video results at https://giving-robots-a-hand.github.io/ .
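The abstract only names a "simple fixed image masking scheme" without spelling it out. Below is a minimal sketch of what such a mask could look like; the masked region (a fixed strip at the bottom of each eye-in-hand frame, where the hand or gripper tends to appear) and the frame sizes are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def mask_eye_in_hand(frame: np.ndarray, mask_height_frac: float = 0.3) -> np.ndarray:
    """Zero out a fixed region of an eye-in-hand frame.

    Assumption (not from the paper text): the masked region is the bottom
    strip of the image, where the human hand or robot gripper tends to
    appear, so that human and robot frames look more alike.
    """
    masked = frame.copy()
    h = frame.shape[0]
    masked[int(h * (1.0 - mask_height_frac)):, :] = 0  # blank the bottom strip
    return masked

# Example: apply the same fixed mask to both human and robot frames
# before feeding them to the visuomotor policy.
human_frame = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
robot_frame = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
policy_input_human = mask_eye_in_hand(human_frame)
policy_input_robot = mask_eye_in_hand(robot_frame)
```

Because the same fixed mask is applied to every frame regardless of its source, no explicit domain adaptation between human and robot images is required.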
Related papers
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train the policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z)
- VideoDex: Learning Dexterity from Internet Videos [27.49510986378025]
We propose leveraging the next best thing to real-world experience: internet videos of humans using their hands.
Visual priors, such as visual features, are often learned from such videos, but videos contain richer information that can serve as a stronger prior.
We build a learning algorithm, VideoDex, that leverages visual, action, and physical priors from human video datasets to guide robot behavior.
arXiv Detail & Related papers (2022-12-08T18:59:59Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (sketched below).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
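As a rough illustration of the reward described above, the snippet below computes a reward as the negative distance to a goal image in a learned embedding space. The encoder architecture is a placeholder and the time-contrastive training of the embedding is omitted; only the reward-as-embedding-distance idea comes from the summary.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Placeholder image encoder; the real embedding network is not specified here."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def embedding_reward(encoder: FrameEncoder,
                     frame: torch.Tensor,
                     goal_frame: torch.Tensor) -> torch.Tensor:
    """Reward = negative distance to the goal image in embedding space."""
    with torch.no_grad():
        z = encoder(frame.unsqueeze(0))
        z_goal = encoder(goal_frame.unsqueeze(0))
    return -torch.norm(z - z_goal, dim=-1).squeeze()

# Usage with dummy channels-first RGB frames:
encoder = FrameEncoder()
reward = embedding_reward(encoder,
                          torch.rand(3, 64, 64),
                          torch.rand(3, 64, 64))
```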
- From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation [26.738893736520364]
We introduce a novel single-camera teleoperation system to collect the 3D demonstrations efficiently with only an iPad and a computer.
We construct a customized robot hand for each user in a physics simulator: a manipulator with the same kinematic structure and shape as the operator's hand.
With imitation learning on our data, we show large improvements over baselines on multiple complex manipulation tasks.
arXiv Detail & Related papers (2022-04-26T17:59:51Z)
- Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube [24.530131506065164]
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand.
The robot observes the human operator via a single RGB camera and imitates their actions in real-time.
We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration.
arXiv Detail & Related papers (2022-02-21T18:59:59Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos depict the same task.
DVD generalizes by learning from a small amount of robot data together with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo (see the sketch below).
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
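A schematic sketch of the DVD idea from the last entry: a discriminator scores whether two videos show the same task, and that score can serve as a reward signal for a planner such as visual MPC. The video encoder and classification head below are illustrative placeholders, not the published architecture.

```python
import torch
import torch.nn as nn

class SameTaskDiscriminator(nn.Module):
    """Scores whether two videos show the same task (DVD-style, schematically).

    Each video is a tensor of shape (T, 3, H, W); the encoder here is a
    stand-in, not the architecture used in the paper.
    """
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: "same task" vs. "different task"
        )

    def encode_video(self, video: torch.Tensor) -> torch.Tensor:
        # Average per-frame embeddings over time to get one video embedding.
        return self.frame_encoder(video).mean(dim=0)

    def forward(self, video_a: torch.Tensor, video_b: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encode_video(video_a), self.encode_video(video_b)])
        return self.head(z)

# The same-task score can act as a reward for a planner (e.g. visual MPC):
# compare a human demo of the task against each candidate robot video and
# prefer the candidate that the discriminator scores highest.
disc = SameTaskDiscriminator()
human_demo = torch.rand(8, 3, 64, 64)            # 8-frame human video
candidate_robot_video = torch.rand(8, 3, 64, 64)
same_task_logit = disc(human_demo, candidate_robot_video)
```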