Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning
- URL: http://arxiv.org/abs/2108.01069v1
- Date: Mon, 2 Aug 2021 17:55:03 GMT
- Title: Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning
- Authors: Jinghuan Shang and Michael S. Ryoo
- Abstract summary: Third-person imitation learning (TPIL) is the concept of learning action policies by observing other agents in a third-person view.
In this paper, we present a TPIL approach for robot tasks with egomotion.
We propose a disentangled representation learning method to enable better state learning for TPIL.
- Score: 45.62939275764248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans learn to imitate by observing others. However, robot imitation
learning generally requires expert demonstrations in the first-person view
(FPV). Collecting such FPV videos for every robot could be very expensive.
Third-person imitation learning (TPIL) is the concept of learning action
policies by observing other agents in a third-person view (TPV), similar to
what humans do. This ultimately allows human and robot demonstration videos in
TPV from many different data sources to be used for policy learning. In this
paper, we present a TPIL approach for robot tasks with egomotion. Although many
robot tasks with ground/aerial mobility involve actions with camera egomotion,
TPIL for such tasks has received limited study. Here, FPV and TPV observations
are visually very different: FPV shows egomotion, while the agent's appearance
is observable only in TPV. To enable better state learning for TPIL, we propose
a disentangled representation learning method. We use a dual auto-encoder
structure together with a representation permutation loss and a time-contrastive
loss to ensure that the state and viewpoint representations are well
disentangled. Our experiments show the effectiveness of our approach.
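To make the training setup concrete, below is a minimal PyTorch sketch of a dual auto-encoder with a representation permutation loss and a time-contrastive loss, in the spirit of the method described in the abstract. The module layout, feature dimensions, loss weighting, and the exact form of each loss (swapped-viewpoint reconstruction for the permutation term, a triplet margin for the time-contrastive term) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the authors' code. Assumes obs_a and obs_b are
# feature vectors of the same time step seen from two viewpoints (e.g. FPV/TPV),
# and obs_neg is a temporally distant frame used as the time-contrastive negative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAutoEncoder(nn.Module):
    def __init__(self, obs_dim=512, state_dim=64, view_dim=16):
        super().__init__()
        # Two encoders: one for the task-relevant state, one for the viewpoint.
        self.state_enc = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                       nn.Linear(256, state_dim))
        self.view_enc = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                      nn.Linear(256, view_dim))
        # Decoder reconstructs the observation from (state, viewpoint).
        self.dec = nn.Sequential(nn.Linear(state_dim + view_dim, 256), nn.ReLU(),
                                 nn.Linear(256, obs_dim))

    def forward(self, obs):
        s, v = self.state_enc(obs), self.view_enc(obs)
        return s, v, self.dec(torch.cat([s, v], dim=-1))


def disentanglement_loss(model, obs_a, obs_b, obs_neg, margin=1.0):
    s_a, v_a, rec_a = model(obs_a)
    s_b, v_b, rec_b = model(obs_b)
    s_n, _, _ = model(obs_neg)

    # Plain reconstruction of each view.
    l_rec = F.mse_loss(rec_a, obs_a) + F.mse_loss(rec_b, obs_b)

    # Representation permutation: swap the viewpoint codes across the two views.
    # If state and viewpoint are disentangled, (state of A, viewpoint of B)
    # should decode to view B, and vice versa.
    l_perm = F.mse_loss(model.dec(torch.cat([s_a, v_b], dim=-1)), obs_b) + \
             F.mse_loss(model.dec(torch.cat([s_b, v_a], dim=-1)), obs_a)

    # Time-contrastive term on the state code: the other view of the same time
    # step is the positive, a temporally distant frame is the negative.
    l_tc = F.triplet_margin_loss(s_a, s_b, s_n, margin=margin)

    return l_rec + l_perm + l_tc


# Example usage with random stand-in features.
model = DualAutoEncoder()
obs_a, obs_b, obs_neg = (torch.randn(8, 512) for _ in range(3))
disentanglement_loss(model, obs_a, obs_b, obs_neg).backward()
```

In practice the inputs would be CNN features of video frames and the loss terms would be weighted, but the sketch shows how the permutation and time-contrastive terms share the same encoders and decoder.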
Related papers
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion model to combine generative pre-training on human videos with policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z)
- Multi-View Masked World Models for Visual Robotic Manipulation [132.97980128530017]
We train a multi-view masked autoencoder which reconstructs pixels of randomly masked viewpoints.
We demonstrate the effectiveness of our method in a range of scenarios.
We also show that the multi-view masked autoencoder trained with multiple randomized viewpoints enables training a policy with strong viewpoint randomization.
arXiv Detail & Related papers (2023-02-05T15:37:02Z)
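As a rough picture of the multi-view masked autoencoding idea in the entry above, the sketch below masks one randomly chosen viewpoint per sample and reconstructs its pixels from the remaining views. It is a deliberately simplified stand-in (dense MLPs over whole flattened views rather than a ViT over patches); the class name, dimensions, and masking scheme are assumptions, not the paper's architecture.

```python
# Simplified, hypothetical sketch of multi-view masked autoencoding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewMaskedAE(nn.Module):
    def __init__(self, n_views=3, obs_dim=3 * 64 * 64, latent=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 512), nn.ReLU(), nn.Linear(512, latent))
        self.dec = nn.Sequential(nn.Linear(latent * n_views, 512), nn.ReLU(), nn.Linear(512, obs_dim))
        self.mask_token = nn.Parameter(torch.zeros(latent))  # stands in for the masked view

    def forward(self, views):
        # views: (batch, n_views, obs_dim) flattened images from each camera.
        b, n, _ = views.shape
        masked = torch.randint(0, n, (b,), device=views.device)  # one masked view per sample
        z = self.enc(views)                                      # (batch, n_views, latent)
        keep = torch.ones(b, n, 1, device=views.device)
        keep[torch.arange(b), masked] = 0.0
        z = z * keep + self.mask_token * (1.0 - keep)            # replace the masked view's code
        recon = self.dec(z.flatten(1))                           # predict the masked view's pixels
        target = views[torch.arange(b), masked]
        return F.mse_loss(recon, target)

# Example: batch of 4 samples, 3 viewpoints each.
MultiViewMaskedAE()(torch.randn(4, 3, 3 * 64 * 64)).backward()
```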
- DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video [86.49357517864937]
We propose DexVIP, an approach to learn dexterous robotic grasping from human-object interaction videos.
We do this by curating grasp images from human-object interaction videos and imposing a prior over the agent's hand pose.
We demonstrate that DexVIP compares favorably to existing approaches that lack a hand pose prior or rely on specialized tele-operation equipment.
arXiv Detail & Related papers (2022-02-01T00:45:57Z)
- Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation [15.632809977544907]
Learning to solve precision-based manipulation tasks from visual feedback could drastically reduce the engineering efforts required by traditional robot systems.
We propose a setting for robotic manipulation in which the agent receives visual feedback from both a third-person camera and an egocentric camera mounted on the robot's wrist.
To fuse visual information from both cameras effectively, we additionally propose to use Transformers with a cross-view attention mechanism.
arXiv Detail & Related papers (2022-01-19T18:39:03Z)
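The cross-view attention mechanism named in the Look Closer entry above can be pictured with a short sketch. The block below is a hedged illustration, not the paper's architecture: the token shapes, the bidirectional attention layout, and the pooling into a single state vector are all assumptions.

```python
# Hypothetical cross-view attention block fusing third-person and wrist-camera features.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Each stream attends to the other stream's tokens.
        self.attn_3p_to_ego = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ego_to_3p = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens_3p, tokens_ego):
        # tokens_*: (batch, num_tokens, dim) feature tokens from each camera.
        fused_3p, _ = self.attn_3p_to_ego(tokens_3p, tokens_ego, tokens_ego)
        fused_ego, _ = self.attn_ego_to_3p(tokens_ego, tokens_3p, tokens_3p)
        # Residual + norm, then pool both streams into one state vector.
        out_3p = self.norm(tokens_3p + fused_3p)
        out_ego = self.norm(tokens_ego + fused_ego)
        return torch.cat([out_3p.mean(dim=1), out_ego.mean(dim=1)], dim=-1)

# Example: 64 feature tokens per camera, batch of 4.
x3p, xego = torch.randn(4, 64, 128), torch.randn(4, 64, 128)
state = CrossViewAttention()(x3p, xego)   # -> (4, 256)
```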
- DexMV: Imitation Learning for Dexterous Manipulation from Human Videos [11.470141313103465]
We propose a new platform and pipeline, DexMV, for imitation learning to bridge the gap between computer vision and robot learning.
We design a platform with: (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand conducting the same tasks.
We show that the demonstrations can indeed improve robot learning by a large margin and enable solving complex tasks that reinforcement learning alone cannot.
arXiv Detail & Related papers (2021-08-12T17:51:18Z)
- Learning Object Manipulation Skills via Approximate State Estimation from Real Videos [47.958512470724926]
Humans are adept at learning new tasks by watching a few instructional videos.
On the other hand, robots that learn new actions either require a lot of effort through trial and error, or use expert demonstrations that are challenging to obtain.
In this paper, we explore a method that facilitates learning object manipulation skills directly from videos.
arXiv Detail & Related papers (2020-11-13T08:53:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.