Cross-Domain Transfer via Semantic Skill Imitation
- URL: http://arxiv.org/abs/2212.07407v1
- Date: Wed, 14 Dec 2022 18:46:14 GMT
- Title: Cross-Domain Transfer via Semantic Skill Imitation
- Authors: Karl Pertsch, Ruta Desai, Vikash Kumar, Franziska Meier, Joseph J.
Lim, Dhruv Batra, Akshara Rai
- Abstract summary: We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain.
Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove".
- Score: 49.83150463391275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an approach for semantic imitation, which uses demonstrations from
a source domain, e.g. human videos, to accelerate reinforcement learning (RL)
in a different target domain, e.g. a robotic manipulator in a simulated
kitchen. Instead of imitating low-level actions like joint velocities, our
approach imitates the sequence of demonstrated semantic skills like "opening
the microwave" or "turning on the stove". This allows us to transfer
demonstrations across environments (e.g. real-world to simulated kitchen) and
agent embodiments (e.g. bimanual human demonstration to robotic arm). We
evaluate on three challenging cross-domain learning problems and match the
performance of demonstration-accelerated RL approaches that require in-domain
demonstrations. In a simulated kitchen environment, our approach learns
long-horizon robot manipulation tasks, using less than 3 minutes of human video
demonstrations from a real-world kitchen. This enables scaling robot learning
via the reuse of demonstrations, e.g. collected as human videos, for learning
in any number of target domains.
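One way to read the core idea is as a reward-shaping term that rewards the target-domain agent for reaching the demonstrated semantic skills in order. The sketch below is only an illustration under that reading: the skill classifier `infer_skill`, the pointer-based sequence matching, and the bonus term are assumptions made for exposition, not the paper's actual method, which learns skill representations and matches them with a learned model rather than comparing exact labels.

```python
from typing import Callable, List

def semantic_imitation_reward(
    demo_skills: List[str],                 # e.g. skill labels extracted from a human video
    infer_skill: Callable[[object], str],   # assumed classifier: observation -> semantic skill label
    bonus: float = 1.0,
) -> Callable[[object, float], float]:
    """Reward wrapper that adds a bonus each time the agent completes the next demonstrated skill."""
    progress = {"idx": 0}  # index of the next demonstrated skill to match

    def reward(observation, env_reward: float) -> float:
        if progress["idx"] < len(demo_skills):
            if infer_skill(observation) == demo_skills[progress["idx"]]:
                progress["idx"] += 1       # the agent reached the next skill in the demonstrated order
                return env_reward + bonus  # shaping term encourages following the semantic sequence
        return env_reward

    return reward

# Hypothetical usage: skill labels come from a human video in the source domain,
# while the shaped reward drives RL for a robot arm in the target domain.
# demo = ["open the microwave", "turn on the stove"]
# r_fn = semantic_imitation_reward(demo, my_skill_classifier)
```

In this toy version the demonstration only constrains which skills are performed and in what order, which is why it can transfer across environments and embodiments as long as both domains share the same skill vocabulary.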
Related papers
- DemoStart: Demonstration-led auto-curriculum applied to sim-to-real with multi-fingered robots [15.034811470942962]
We present DemoStart, a novel auto-curriculum reinforcement learning method capable of learning complex manipulation behaviors on an arm equipped with a three-fingered robotic hand.
Learning from simulation drastically reduces the development cycle of behavior generation, and domain randomization techniques are leveraged to achieve successful zero-shot sim-to-real transfer.
arXiv Detail & Related papers (2024-09-10T16:05:25Z)
- CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation [27.069114421842045]
CyberDemo is a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks.
Our research demonstrates the significant potential of simulated human demonstrations for real-world dexterous manipulation tasks.
arXiv Detail & Related papers (2024-02-22T18:54:32Z) - Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z) - Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a minimal sketch of this kind of embedding-distance reward appears after the related-papers list below).
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Bottom-Up Skill Discovery from Unsegmented Demonstrations for
Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method outperforms state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z) - Video2Skill: Adapting Events in Demonstration Videos to Skills in an
Environment using Cyclic MDP Homomorphisms [16.939129935919325]
Video2Skill (V2S) extends the ability to learn from demonstrations to artificial agents by allowing a robot arm to learn from human cooking videos.
We first use sequence-to-sequence autoencoder-style architectures to learn a temporal latent space for events in long-horizon demonstrations.
We then transfer these representations to the robotic target domain, using a small amount of offline and unrelated interaction data.
arXiv Detail & Related papers (2021-09-08T17:59:01Z) - DexMV: Imitation Learning for Dexterous Manipulation from Human Videos [11.470141313103465]
We propose a new platform and pipeline, DexMV, for imitation learning to bridge the gap between computer vision and robot learning.
We design a platform with: (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand conducting the same tasks.
We show that the demonstrations can indeed improve robot learning by a large margin and solve the complex tasks which reinforcement learning alone cannot solve.
arXiv Detail & Related papers (2021-08-12T17:51:18Z) - Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use a commercially available reacher-grabber assistive tool both as the data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
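For the reward-learning entry above (Learning Reward Functions for Robotic Manipulation by Observing Humans), here is a minimal sketch of the kind of reward its summary describes: negative distance to a goal image in a learned embedding space. The encoder `phi` and the exact distance and normalization choices are assumptions made for illustration; the cited paper trains the embedding with a time-contrastive objective, which this sketch does not reproduce.

```python
import numpy as np

def embedding_distance_reward(phi, observation, goal_image) -> float:
    """Sketch: reward = negative L2 distance to the goal in an embedding space.

    `phi` is assumed to be an image encoder trained elsewhere (per the cited
    paper, with a time-contrastive objective); this function only turns the
    learned embedding into a dense reward for a manipulation policy.
    """
    z_obs = np.asarray(phi(observation), dtype=np.float64)
    z_goal = np.asarray(phi(goal_image), dtype=np.float64)
    return -float(np.linalg.norm(z_obs - z_goal))
```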
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.