Large-Scale Actionless Video Pre-Training via Discrete Diffusion for
Efficient Policy Learning
- URL: http://arxiv.org/abs/2402.14407v1
- Date: Thu, 22 Feb 2024 09:48:47 GMT
- Title: Large-Scale Actionless Video Pre-Training via Discrete Diffusion for
Efficient Policy Learning
- Authors: Haoran He, Chenjia Bai, Ling Pan, Weinan Zhang, Bin Zhao, Xuelong Li
- Abstract summary: We introduce a novel framework that combines generative pre-training on human videos and policy fine-tuning on action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
- Score: 73.69573252516761
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning a generalist embodied agent capable of completing multiple tasks
poses challenges, primarily stemming from the scarcity of action-labeled
robotic datasets. In contrast, a vast amount of human videos exist, capturing
intricate tasks and interactions with the physical world. Promising prospects
arise for utilizing actionless human videos for pre-training and transferring
the knowledge to facilitate robot policy learning through limited robot
demonstrations. In this paper, we introduce a novel framework that leverages a
unified discrete diffusion to combine generative pre-training on human videos
and policy fine-tuning on a small number of action-labeled robot videos. We
start by compressing both human and robot videos into unified video tokens. In
the pre-training stage, we employ a discrete diffusion model with a
mask-and-replace diffusion strategy to predict future video tokens in the
latent space. In the fine-tuning stage, we harness the imagined future videos
to guide low-level action learning trained on a limited set of robot data.
Experiments demonstrate that our method generates high-fidelity future videos
for planning and enhances the fine-tuned policies compared to previous
state-of-the-art approaches with superior generalization ability. Our project
website is available at https://video-diff.github.io/.
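To make the pre-training recipe concrete, the following is a minimal PyTorch sketch of the mask-and-replace corruption applied to video tokens during discrete diffusion pre-training. The vocabulary size, the linear noise schedule, and all names (corrupt_tokens, mask_id, max_mask, max_replace) are illustrative assumptions, not the paper's actual implementation or hyperparameters.

# A minimal sketch (assumed names and schedule, not the paper's implementation)
# of mask-and-replace corruption over discrete video tokens.
import torch

def corrupt_tokens(x0, t, num_steps, vocab_size, mask_id,
                   max_mask=0.9, max_replace=0.1):
    """Corrupt clean video tokens x0 ([batch, length] LongTensor) at step t.

    Each token is independently masked with probability gamma_t, replaced by a
    uniformly random vocabulary token with probability beta_t, or kept as-is.
    """
    frac = t / num_steps                        # 0 = clean, 1 = fully corrupted
    gamma_t = max_mask * frac                   # masking probability at step t
    beta_t = max_replace * frac                 # random-replacement probability

    u = torch.rand(x0.shape)                    # one uniform draw per token
    random_tokens = torch.randint_like(x0, vocab_size)

    xt = x0.clone()
    xt[u < beta_t] = random_tokens[u < beta_t]            # replace branch
    xt[(u >= beta_t) & (u < beta_t + gamma_t)] = mask_id  # mask branch
    return xt

# Example: corrupt a batch of tokenized video clips halfway through the schedule.
x0 = torch.randint(0, 1024, (2, 256))           # 2 clips, 256 tokens each
xt = corrupt_tokens(x0, t=50, num_steps=100, vocab_size=1024, mask_id=1024)

During pre-training, a denoising model would then be trained to recover the clean future tokens from xt in the latent space; per the abstract, the fine-tuning stage uses such imagined future videos to guide low-level action learning on a small set of robot data.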
Related papers
- Vidar: Embodied Video Diffusion Model for Generalist Bimanual Manipulation [21.424029706788883]
We introduce Video Diffusion for Action Reasoning (Vidar). We pre-train the video diffusion model on 750K multi-view videos from three real-world bimanual robot platforms. With only 20 minutes of human demonstrations on an unseen robot platform, Vidar generalizes to unseen tasks and backgrounds with strong semantic understanding.
arXiv Detail & Related papers (2025-07-17T08:31:55Z) - ZeroMimic: Distilling Robotic Manipulation Skills from Web Videos [15.809468471562537]
ZeroMimic generates image goal-conditioned skill policies for several common manipulation tasks.
We evaluate ZeroMimic's out-of-the-box performance in varied real-world and simulated kitchen settings.
To enable plug-and-play reuse of ZeroMimic policies on other task setups and robots, we release software and policy checkpoints.
arXiv Detail & Related papers (2025-03-31T09:27:00Z) - VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation [53.63540587160549]
VidBot is a framework enabling zero-shot robotic manipulation using learned 3D affordance from in-the-wild monocular RGB-only human videos.
VidBot paves the way for leveraging everyday human videos to make robot learning more scalable.
arXiv Detail & Related papers (2025-03-10T10:04:58Z) - Vision-based Manipulation from Single Human Video with Open-World Object Graphs [58.23098483464538]
We present an object-centric approach to empower robots to learn vision-based manipulation skills from human videos.
We introduce ORION, an algorithm that tackles the problem by extracting an object-centric manipulation plan from a single RGB-D video.
arXiv Detail & Related papers (2024-05-30T17:56:54Z) - Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers [36.497624484863785]
We introduce Vid2Robot, an end-to-end video-conditioned policy that takes human videos demonstrating manipulation tasks as input and produces robot actions.
Our model is trained with a large dataset of prompt video-robot trajectory pairs to learn unified representations of human and robot actions from videos.
We evaluate Vid2Robot on real-world robots and observe over 20% improvement over BC-Z when using human prompt videos.
arXiv Detail & Related papers (2024-03-19T17:47:37Z) - Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z) - Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real-world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z) - Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Video2Skill: Adapting Events in Demonstration Videos to Skills in an
Environment using Cyclic MDP Homomorphisms [16.939129935919325]
Video2Skill (V2S) attempts to extend this capability to artificial agents by allowing a robot arm to learn from human cooking videos.
We first use sequence-to-sequence Auto-Encoder style architectures to learn a temporal latent space for events in long-horizon demonstrations.
We then transfer these representations to the robotic target domain, using a small amount of offline and unrelated interaction data.
arXiv Detail & Related papers (2021-09-08T17:59:01Z) - Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human
Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task.
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
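As a generic illustration of the DVD idea above (a discriminator that scores whether two videos show the same task, with that score reused as a reward), here is a hedged sketch; the encoder, tensor shapes, and names are assumptions made for illustration, not the architecture from the DVD paper.

# A generic sketch of a same-task video discriminator used as a reward signal,
# in the spirit of DVD. Encoder, shapes, and names are illustrative assumptions.
import torch
import torch.nn as nn

class SameTaskDiscriminator(nn.Module):
    def __init__(self, frame_dim=3 * 64 * 64, embed_dim=256, hidden=128):
        super().__init__()
        # Stand-in video encoder: embed flattened frames and mean-pool over time.
        # A real system would use a pretrained video backbone instead.
        self.frame_encoder = nn.Linear(frame_dim, embed_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode(self, video):
        # video: [batch, time, channels, height, width] -> [batch, embed_dim]
        feats = self.frame_encoder(video.flatten(2))   # [batch, time, embed_dim]
        return feats.mean(dim=1)

    def forward(self, video_a, video_b):
        # Logit that the two clips show the same task.
        za, zb = self.encode(video_a), self.encode(video_b)
        return self.classifier(torch.cat([za, zb], dim=-1)).squeeze(-1)

def same_task_reward(disc, robot_video, human_demo_video):
    # Reward a planner can maximize: probability the rollout matches the demo's task.
    with torch.no_grad():
        return torch.sigmoid(disc(robot_video, human_demo_video))

Such a reward could then be plugged into a visual model predictive controller that ranks candidate rollouts against a single human demonstration, which is how the summary above describes DVD being used.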