Affordances from Human Videos as a Versatile Representation for Robotics
- URL: http://arxiv.org/abs/2304.08488v1
- Date: Mon, 17 Apr 2023 17:59:34 GMT
- Title: Affordances from Human Videos as a Versatile Representation for Robotics
- Authors: Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, Deepak Pathak
- Abstract summary: We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real-world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
- Score: 31.248842798600606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building a robot that can understand and learn to interact by watching humans
has inspired several vision problems. However, despite some successful results
on static datasets, it remains unclear how current models can be used on a
robot directly. In this paper, we aim to bridge this gap by leveraging videos
of human interactions in an environment-centric manner. Utilizing internet
videos of human behavior, we train a visual affordance model that estimates
where and how in the scene a human is likely to interact. The structure of
these behavioral affordances directly enables the robot to perform many complex
tasks. We show how to seamlessly integrate our affordance model with four robot
learning paradigms including offline imitation learning, exploration,
goal-conditioned learning, and action parameterization for reinforcement
learning. We show the efficacy of our approach, which we call VRB, across 4
real-world environments, over 10 different tasks, and 2 robotic platforms
operating in the wild. Results, visualizations and videos at
https://robo-affordances.github.io/
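The abstract does not spell out how the "where and how" predictions are represented or how they plug into the four learning paradigms, so the following is only a minimal sketch. It assumes the affordance model returns a contact-point heatmap over the image and a post-contact motion direction; the helper `camera_to_world` and all other names are hypothetical, not the paper's API.

```python
import numpy as np

def select_contact_point(heatmap: np.ndarray) -> tuple[int, int]:
    """Pick the pixel with the highest predicted interaction likelihood ('where')."""
    row, col = np.unravel_index(int(np.argmax(heatmap)), heatmap.shape)
    return int(row), int(col)

def affordance_to_action(contact_px, post_contact_dir, camera_to_world):
    """Turn an image-space affordance ('where' + 'how') into a parameterized
    robot action: a 3D contact location plus a unit motion direction."""
    # Back-project the 2D contact pixel into the robot workspace using a
    # user-supplied calibration function (hypothetical).
    contact_xyz = camera_to_world(contact_px)
    # Normalize the predicted post-contact direction so it can scale a fixed
    # motion primitive or define a compact action space for RL.
    direction = np.asarray(post_contact_dir, dtype=np.float64)
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    return {"contact_xyz": contact_xyz, "move_dir": direction}
```

A parameterization of this form is what would let one output format serve several downstream uses: scoring demonstrations for offline imitation, proposing exploration targets, defining goals, or acting as the action space for reinforcement learning, as the abstract lists.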
Related papers
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Structured World Models from Human Videos [45.08503470821952]
We tackle the problem of learning complex, general behaviors directly in the real world.
We propose an approach for robots to efficiently learn manipulation skills using only a handful of real-world interaction trajectories.
arXiv Detail & Related papers (2023-08-21T17:59:32Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We learn our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, called MOO, which leverages a pre-trained vision-language model to extract object-identifying information from the language command and image.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a minimal sketch of this reward appears after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task.
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
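The embedding-distance reward mentioned for "Learning Reward Functions for Robotic Manipulation by Observing Humans" is the most concrete recipe in the list above, so here is a minimal sketch of just that reward computation. The encoder `phi` is a stand-in for a network trained with a time-contrastive objective (its architecture and training loop are not shown), and all names are hypothetical rather than taken from the paper.

```python
import torch

def embedding_distance_reward(phi: torch.nn.Module,
                              obs_frame: torch.Tensor,
                              goal_frame: torch.Tensor) -> float:
    """Reward = negative L2 distance to the goal image in a learned embedding space."""
    with torch.no_grad():
        z_obs = phi(obs_frame.unsqueeze(0))    # (1, D) embedding of the current frame
        z_goal = phi(goal_frame.unsqueeze(0))  # (1, D) embedding of the goal frame
    # Closer to the goal in embedding space -> larger (less negative) reward.
    return -torch.linalg.norm(z_obs - z_goal, dim=-1).item()
```

Such a task-agnostic reward could then drive a standard reinforcement-learning or planning loop in place of a hand-designed task reward, which is the use case that entry describes.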