CyberDemo: Augmenting Simulated Human Demonstration for Real-World
Dexterous Manipulation
- URL: http://arxiv.org/abs/2402.14795v2
- Date: Fri, 1 Mar 2024 19:53:57 GMT
- Authors: Jun Wang, Yuzhe Qin, Kaiming Kuang, Yigit Korkmaz, Akhilan
Gurumoorthy, Hao Su, Xiaolong Wang
- Abstract summary: CyberDemo is a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks.
Our research demonstrates the significant potential of simulated human demonstrations for real-world dexterous manipulation tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce CyberDemo, a novel approach to robotic imitation learning that
leverages simulated human demonstrations for real-world tasks. By incorporating
extensive data augmentation in a simulated environment, CyberDemo outperforms
traditional in-domain real-world demonstrations when transferred to the real
world, handling diverse physical and visual conditions. Despite the affordability
and convenience of its data collection, CyberDemo outperforms baseline methods in
success rate across various tasks and generalizes to previously unseen objects. For
example, it can rotate novel tetra-valves and penta-valves, even though the human
demonstrations involved only tri-valves. Our research demonstrates the significant
potential of simulated
human demonstrations for real-world dexterous manipulation tasks. More details
can be found at https://cyber-demo.github.io
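The abstract describes augmenting simulated human demonstrations with varied physical and visual conditions before real-world transfer. As a rough illustration only (this is not CyberDemo's actual pipeline; the function, field names, and randomization ranges below are all hypothetical), such augmentation can be sketched as replaying a recorded action sequence under randomized simulator settings:

```python
import random

def augment_demo(demo, n_variants=4, seed=0):
    """Replay one simulated demonstration under randomized conditions.

    Hypothetical sketch: keeps the demonstrated action sequence fixed
    while randomizing visual and physical simulator parameters.
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        variants.append({
            "actions": demo["actions"],                # demonstrated actions, unchanged
            "light_intensity": rng.uniform(0.5, 1.5),  # visual randomization
            "camera_yaw_deg": rng.uniform(-15.0, 15.0),
            "object_scale": rng.uniform(0.9, 1.1),     # physical randomization
            "friction": rng.uniform(0.7, 1.3),
        })
    return variants

demo = {"actions": [[0.0, 0.1], [0.1, 0.2]]}
augmented = augment_demo(demo)
print(len(augmented))  # 4
```

Each variant would then be rendered and simulated to yield an additional training trajectory, which is one plausible way a small set of human demonstrations could be expanded into a much larger, more diverse dataset.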
Related papers
- DemoStart: Demonstration-led auto-curriculum applied to sim-to-real with multi-fingered robots (arXiv, 2024-09-10)
  We present DemoStart, a novel auto-curriculum reinforcement learning method capable of learning complex manipulation behaviors on an arm equipped with a three-fingered robotic hand.
  Learning from simulation drastically reduces the development cycle of behavior generation, and domain randomization techniques are leveraged to achieve successful zero-shot sim-to-real transfer.
- RealDex: Towards Human-like Grasping for Robotic Dexterous Hand (arXiv, 2024-02-21)
  We introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns.
  RealDex holds immense promise in advancing humanoid robots for automated perception, cognition, and manipulation in real-world scenarios.
- MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations (arXiv, 2023-10-26)
  MimicGen is a system for automatically synthesizing large-scale, rich datasets from only a small number of human demonstrations.
  We show that robot agents can be effectively trained on this generated dataset by imitation learning to achieve strong performance in long-horizon and high-precision tasks.
- Learning Interactive Real-World Simulators (arXiv, 2023-10-09)
  We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
  We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
  Video captioning models can benefit from training with simulated experience, opening up even wider applications.
- Cross-Domain Transfer via Semantic Skill Imitation (arXiv, 2022-12-14)
  We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL).
  Instead of imitating low-level actions such as joint velocities, our approach imitates the sequence of demonstrated semantic skills, such as "opening the microwave" or "turning on the stove".
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality (arXiv, 2022-10-25)
  We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
  Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation across diverse hardware and simulator setups.
- Video2Skill: Adapting Events in Demonstration Videos to Skills in an Environment using Cyclic MDP Homomorphisms (arXiv, 2021-09-08)
  Video2Skill (V2S) extends learning from demonstration to artificial agents by allowing a robot arm to learn from human cooking videos.
  We first use sequence-to-sequence auto-encoder style architectures to learn a temporal latent space for events in long-horizon demonstrations.
  We then transfer these representations to the robotic target domain, using a small amount of offline and unrelated interaction data.
- Visual Imitation Made Easy (arXiv, 2020-08-11)
  We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
  We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
  We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
This list is automatically generated from the titles and abstracts of the papers on this site.