Zero-Shot Robot Manipulation from Passive Human Videos
- URL: http://arxiv.org/abs/2302.02011v1
- Date: Fri, 3 Feb 2023 21:39:52 GMT
- Title: Zero-Shot Robot Manipulation from Passive Human Videos
- Authors: Homanga Bharadhwaj, Abhinav Gupta, Shubham Tulsiani, Vikash Kumar
- Abstract summary: We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
- Score: 59.193076151832145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can we learn robot manipulation for everyday tasks, only by watching videos
of humans doing arbitrary tasks in different unstructured settings? Unlike
widely adopted strategies of learning task-specific behaviors or direct
imitation of a human video, we develop a framework for extracting
agent-agnostic action representations from human videos, and then map these to the
agent's embodiment during deployment. Our framework is based on predicting
plausible human hand trajectories given an initial image of a scene. After
training this prediction model on a diverse set of human videos from the
internet, we deploy the trained model zero-shot for physical robot manipulation
tasks, after appropriate transformations to the robot's embodiment. This simple
strategy lets us solve coarse manipulation tasks like opening and closing
drawers, pushing, and tool use, without access to any in-domain robot
manipulation trajectories. Our real-world deployment results establish a strong
baseline for action prediction information that can be acquired from diverse
arbitrary videos of human activities, and be useful for zero-shot robotic
manipulation in unseen scenes.
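As a rough illustration of the pipeline described in the abstract (predict a plausible hand trajectory from a single scene image, then retarget it to the robot's embodiment), the following Python sketch shows one way the pieces could fit together. The class, method, and transform names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical interface: a trajectory-prediction model trained on passive
# human videos. `HandTrajectoryPredictor`, `predict`, and
# `move_end_effector_to` are illustrative names, not the paper's API.
class HandTrajectoryPredictor:
    def predict(self, image: np.ndarray, horizon: int = 10) -> np.ndarray:
        """Return a (horizon, 3) array of plausible future hand positions
        in camera coordinates, given one initial scene image."""
        raise NotImplementedError

def retarget_to_robot(hand_traj_cam: np.ndarray,
                      cam_to_robot: np.ndarray) -> np.ndarray:
    """Map predicted hand waypoints into the robot base frame with a fixed
    rigid transform (4x4 camera extrinsics). This stands in for the
    'appropriate transformations to the robot's embodiment' in the abstract."""
    ones = np.ones((len(hand_traj_cam), 1))
    homogeneous = np.concatenate([hand_traj_cam, ones], axis=1)  # (T, 4)
    return (cam_to_robot @ homogeneous.T).T[:, :3]               # (T, 3)

def zero_shot_manipulation(image, predictor, cam_to_robot, robot):
    # 1. Agent-agnostic action representation: a plausible hand trajectory.
    hand_traj = predictor.predict(image)
    # 2. Map it to the robot's embodiment (end-effector waypoints).
    ee_waypoints = retarget_to_robot(hand_traj, cam_to_robot)
    # 3. Execute open-loop with the robot's low-level controller.
    for waypoint in ee_waypoints:
        robot.move_end_effector_to(waypoint)
```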
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans [58.27029676638521]
We show how passive human videos can serve as a rich source of data for learning such generalist robots.
We learn a human plan predictor that, given a current image of a scene and a goal image, predicts the future hand and object configurations.
We show that our learned system can perform over 16 manipulation skills that generalize to 40 objects.
arXiv Detail & Related papers (2023-12-01T18:54:12Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We learn our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos [28.712673809577076]
We present an approach for physical imitation from human videos for robot manipulation tasks.
We design a perception module that learns to translate human videos to the robot domain followed by unsupervised keypoint detection.
We evaluate the effectiveness of our approach on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing.
arXiv Detail & Related papers (2021-01-18T18:50:32Z)
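As a side note on the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, its core idea (rewards as distances to a goal in a time-contrastively learned embedding space) can be sketched as follows. The encoder name and signature are assumptions for illustration, not that paper's code.

```python
import numpy as np

def time_contrastive_reward(embed, observation, goal_image):
    """Reward = negative distance to the goal in an embedding space.

    `embed` stands for a frozen encoder trained with a time-contrastive
    objective (frames nearby in time map to nearby embeddings). The name
    and signature are assumptions, not that paper's code.
    """
    z_obs = np.asarray(embed(observation))
    z_goal = np.asarray(embed(goal_image))
    return -float(np.linalg.norm(z_obs - z_goal))

# Usage sketch: plug the reward into an off-the-shelf RL loop, e.g.
#   r_t = time_contrastive_reward(embed, o_t, goal_image)
```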
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.