Learning Video-Conditioned Policies for Unseen Manipulation Tasks
- URL: http://arxiv.org/abs/2305.06289v1
- Date: Wed, 10 May 2023 16:25:42 GMT
- Title: Learning Video-Conditioned Policies for Unseen Manipulation Tasks
- Authors: Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
- Abstract summary: Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
- Score: 83.2240629060453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to specify robot commands by a non-expert user is critical for
building generalist agents capable of solving a large variety of tasks. One
convenient way to specify the intended robot goal is by a video of a person
demonstrating the target task. While prior work typically aims to imitate human
demonstrations performed in robot environments, here we focus on a more
realistic and challenging setup with demonstrations recorded in natural and
diverse human environments. We propose Video-conditioned Policy learning (ViP),
a data-driven approach that maps human demonstrations of previously unseen
tasks to robot manipulation skills. To this end, we learn our policy to
generate appropriate actions given current scene observations and a video of
the target task. To encourage generalization to new tasks, we avoid particular
tasks during training and learn our policy from unlabelled robot trajectories
and corresponding robot videos. Both robot and human videos in our framework
are represented by video embeddings pre-trained for human action recognition.
At test time we first translate human videos to robot videos in the common
video embedding space, and then use resulting embeddings to condition our
policies. Notably, our approach enables robot control by human demonstrations
in a zero-shot manner, i.e., without using robot trajectories paired with human
instructions during training. We validate our approach on a set of challenging
multi-task robot manipulation environments and outperform the state of the art. Our
method also demonstrates excellent performance in a new challenging zero-shot
setup where no paired data is used during training.
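
The abstract describes a concrete inference pipeline: embed the human demonstration with a video encoder pre-trained for human action recognition, translate that embedding into the robot video embedding space, and condition the policy on the current observation together with the translated embedding. Below is a minimal sketch of that flow; the module names, network shapes, and the form of the translation step are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a ViP-style inference loop as described in the abstract.
# All module names, shapes, and the translation step are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class VideoEncoder(nn.Module):
    """Stand-in for a video embedding network pre-trained for human
    action recognition."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Placeholder: flatten frames and project, then average over time.
        self.proj = nn.Linear(3 * 112 * 112, embed_dim)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (T, 3, 112, 112) -> single clip embedding (embed_dim,)
        frames = video.flatten(start_dim=1)      # (T, 3*112*112)
        return self.proj(frames).mean(dim=0)     # (embed_dim,)


class HumanToRobotTranslator(nn.Module):
    """Maps a human-video embedding into the robot-video embedding space,
    so the policy only ever sees robot-like conditioning vectors."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, embed_dim),
                                 nn.ReLU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, human_emb: torch.Tensor) -> torch.Tensor:
        return self.net(human_emb)


class VideoConditionedPolicy(nn.Module):
    """Outputs an action given the current observation and a (robot-space)
    video embedding of the target task."""

    def __init__(self, obs_dim: int = 64, embed_dim: int = 512, act_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + embed_dim, 256),
                                 nn.ReLU(),
                                 nn.Linear(256, act_dim))

    def forward(self, obs: torch.Tensor, task_emb: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, task_emb], dim=-1))


if __name__ == "__main__":
    encoder = VideoEncoder()
    translator = HumanToRobotTranslator()
    policy = VideoConditionedPolicy()

    human_video = torch.randn(16, 3, 112, 112)   # a human demonstration clip
    obs = torch.randn(64)                        # current robot observation

    with torch.no_grad():
        human_emb = encoder(human_video)         # embed the human video
        robot_emb = translator(human_emb)        # translate to robot space
        action = policy(obs, robot_emb)          # condition the policy
    print(action.shape)                          # torch.Size([7])
```

The sketch only shows where each component sits at test time; training details (unlabelled robot trajectories and robot videos, no paired human data) are as described in the abstract.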
Related papers
- Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers [36.497624484863785]
We introduce Vid2Robot, an end-to-end video-conditioned policy that takes human videos demonstrating manipulation tasks as input and produces robot actions.
Our model is trained with a large dataset of prompt video-robot trajectory pairs to learn unified representations of human and robot actions from videos.
We evaluate Vid2Robot on real-world robots and observe over 20% improvement over BC-Z when using human prompt videos.
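
As a rough illustration of the cross-attention conditioning mentioned in the Vid2Robot summary above, the snippet below lets robot observation tokens attend to prompt-video tokens before an action head. Token shapes and module names are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of cross-attention conditioning on a prompt video.
import torch
import torch.nn as nn


class CrossAttentionPolicy(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4, act_dim: int = 7):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.action_head = nn.Linear(dim, act_dim)

    def forward(self, obs_tokens: torch.Tensor, prompt_tokens: torch.Tensor) -> torch.Tensor:
        # obs_tokens: (B, N_obs, dim) from the robot's current observation
        # prompt_tokens: (B, N_vid, dim) from the human prompt video
        fused, _ = self.attn(query=obs_tokens, key=prompt_tokens, value=prompt_tokens)
        return self.action_head(fused.mean(dim=1))   # (B, act_dim)


policy = CrossAttentionPolicy()
action = policy(torch.randn(1, 8, 256), torch.randn(1, 32, 256))
print(action.shape)  # torch.Size([1, 7])
```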
arXiv Detail & Related papers (2024-03-19T17:47:37Z)
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Learning to Act from Actionless Videos through Dense Correspondences [87.1243107115642]
We present an approach to construct a video-based robot policy capable of reliably executing diverse tasks across different robots and environments.
Our method leverages images as a task-agnostic representation, encoding both the state and action information, and text as a general representation for specifying robot goals.
We demonstrate the efficacy of our approach in learning policies on table-top manipulation and navigation tasks.
arXiv Detail & Related papers (2023-10-12T17:59:23Z)
- Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
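
A hypothetical sketch of the hand-trajectory prediction idea: a small head that maps an image feature to a short sequence of 2D hand waypoints. The feature dimension, horizon, and architecture are all assumptions, not the paper's model.

```python
# Illustrative trajectory-prediction head; shapes are assumed for the example.
import torch
import torch.nn as nn


class HandTrajectoryPredictor(nn.Module):
    def __init__(self, feat_dim: int = 256, horizon: int = 8):
        super().__init__()
        self.horizon = horizon
        # Predict (x, y) waypoints for the hand over the horizon.
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, horizon * 2))

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        out = self.head(image_feat)              # (B, horizon * 2)
        return out.view(-1, self.horizon, 2)     # (B, horizon, 2)


pred = HandTrajectoryPredictor()
waypoints = pred(torch.randn(1, 256))
print(waypoints.shape)  # torch.Size([1, 8, 2])
```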
arXiv Detail & Related papers (2023-02-03T21:39:52Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
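
The reward described above can be pictured as a negative distance to the goal in a learned embedding space. A minimal sketch follows, with a stand-in encoder in place of the time-contrastively trained one; names and shapes are assumptions for illustration.

```python
# Illustrative sketch (not the paper's code): reward as negative distance
# to a goal image in a learned embedding space.
import torch
import torch.nn as nn


def embedding_distance_reward(embed, obs_image: torch.Tensor, goal_image: torch.Tensor) -> torch.Tensor:
    """Reward r_t = -|| phi(o_t) - phi(g) ||_2 in embedding space."""
    with torch.no_grad():
        z_obs = embed(obs_image)
        z_goal = embed(goal_image)
    return -torch.linalg.norm(z_obs - z_goal)


# Stand-in encoder; in the paper this would be trained time-contrastively.
embed = nn.Sequential(nn.Flatten(start_dim=0), nn.Linear(3 * 64 * 64, 128))
reward = embedding_distance_reward(embed, torch.randn(3, 64, 64), torch.randn(3, 64, 64))
print(float(reward))
```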
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task.
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
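
The core of DVD as summarized here is a classifier over pairs of videos. Below is a toy sketch of such a same-task discriminator operating on precomputed video embeddings; the encoder, dimensions, and head are assumed for illustration only.

```python
# Rough sketch of a same-task discriminator over video embeddings.
import torch
import torch.nn as nn


class SameTaskDiscriminator(nn.Module):
    """Scores whether two video embeddings depict the same task."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
        # Returns a logit; sigmoid(logit) ~ probability of "same task",
        # which can then serve as a reward signal for planning/control.
        return self.classifier(torch.cat([z_a, z_b], dim=-1))


disc = SameTaskDiscriminator()
logit = disc(torch.randn(1, 128), torch.randn(1, 128))
print(torch.sigmoid(logit))
```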
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
- Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos [28.712673809577076]
We present an approach for physical imitation from human videos for robot manipulation tasks.
We design a perception module that learns to translate human videos to the robot domain followed by unsupervised keypoint detection.
We evaluate the effectiveness of our approach on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing.
arXiv Detail & Related papers (2021-01-18T18:50:32Z)