X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real
- URL: http://arxiv.org/abs/2505.07096v3
- Date: Sun, 15 Jun 2025 04:59:43 GMT
- Title: X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real
- Authors: Prithwish Dan, Kushal Kedia, Angela Chao, Edward Weiyi Duan, Maximus Adrian Pace, Wei-Chiu Ma, Sanjiban Choudhury
- Abstract summary: X-Sim is a real-to-sim-to-real framework that uses object motion as a dense and transferable signal for learning robot policies. X-Sim starts by reconstructing a simulation from an RGBD human video and tracking object trajectories to define object-centric rewards, which are used to train a reinforcement learning (RL) policy in simulation. The learned policy is then distilled into an image-conditioned diffusion policy using synthetic rollouts rendered with varied viewpoints and lighting.
- Score: 13.644777529832291
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human videos offer a scalable way to train robot manipulation policies, but lack the action labels needed by standard imitation learning algorithms. Existing cross-embodiment approaches try to map human motion to robot actions, but often fail when the embodiments differ significantly. We propose X-Sim, a real-to-sim-to-real framework that uses object motion as a dense and transferable signal for learning robot policies. X-Sim starts by reconstructing a photorealistic simulation from an RGBD human video and tracking object trajectories to define object-centric rewards. These rewards are used to train a reinforcement learning (RL) policy in simulation. The learned policy is then distilled into an image-conditioned diffusion policy using synthetic rollouts rendered with varied viewpoints and lighting. To transfer to the real world, X-Sim introduces an online domain adaptation technique that aligns real and simulated observations during deployment. Importantly, X-Sim does not require any robot teleoperation data. We evaluate it across 5 manipulation tasks in 2 environments and show that it: (1) improves task progress by 30% on average over hand-tracking and sim-to-real baselines, (2) matches behavior cloning with 10x less data collection time, and (3) generalizes to new camera viewpoints and test-time changes. Code and videos are available at https://portal-cornell.github.io/X-Sim/.
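The core mechanism in the abstract is an object-centric reward: object poses tracked from the human video serve as a dense, embodiment-agnostic training signal for RL in the reconstructed simulation. Below is a minimal sketch of what such a reward could look like, assuming hypothetical pose representations and weights; it illustrates the idea and is not the authors' released implementation (see the project page for that).

```python
import numpy as np

def object_centric_reward(sim_object_pose, ref_object_pose,
                          pos_weight=1.0, rot_weight=0.1):
    """Dense reward comparing the simulated object pose to the pose tracked
    from the human video at the same timestep.

    Each pose is a (position_xyz, quaternion_wxyz) pair. The distance metrics
    and weights are illustrative assumptions, not values from the paper.
    """
    sim_pos, sim_quat = sim_object_pose
    ref_pos, ref_quat = ref_object_pose

    # Position error: Euclidean distance between object centers.
    pos_err = np.linalg.norm(np.asarray(sim_pos) - np.asarray(ref_pos))

    # Orientation error: rotation angle between the two quaternions (radians).
    dot = np.clip(abs(np.dot(sim_quat, ref_quat)), 0.0, 1.0)
    rot_err = 2.0 * np.arccos(dot)

    # Negative weighted error: a dense, embodiment-agnostic signal that
    # depends only on object motion, not on the human or robot arm.
    return -(pos_weight * pos_err + rot_weight * rot_err)


def trajectory_reward(sim_object_traj, ref_object_traj):
    """Sum of per-step rewards for a rollout tracked against the reference."""
    return sum(object_centric_reward(s, r)
               for s, r in zip(sim_object_traj, ref_object_traj))
```

In this reading, the RL policy in simulation maximizes rewards of this form, and the resulting policy is then distilled into an image-conditioned diffusion policy from rendered rollouts; the distillation and the online observation alignment at deployment are described only at a high level in the abstract.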
Related papers
- Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration [21.94699075066712]
Teaching robots dexterous manipulation skills often requires collecting hundreds of demonstrations using wearables or teleoperation.
We propose Human2Sim2Robot, a novel real-to-sim-to-real framework for training dexterous manipulation policies.
arXiv Detail & Related papers (2025-04-17T03:15:20Z)
- Real-is-Sim: Bridging the Sim-to-Real Gap with a Dynamic Digital Twin [8.498460043101499]
We introduce real-is-sim, a new approach to integrating simulation into behavior cloning pipelines.
In contrast to real-only methods, which cannot safely test policies before deployment, and sim-to-real methods, which require complex adaptation to cross the sim-to-real gap, our framework allows policies to seamlessly switch between running on real hardware and running in parallelized virtual environments.
arXiv Detail & Related papers (2025-04-04T17:05:56Z)
- Video2Policy: Scaling up Manipulation Tasks in Simulation through Internet Videos [61.925837909969815]
We introduce Video2Policy, a novel framework that leverages internet RGB videos to reconstruct tasks based on everyday human behavior.
Our method can successfully train RL policies on such tasks, including complex and challenging tasks such as throwing.
We show that the generated simulation data can be scaled up for training a general policy, and it can be transferred back to the real robot in a Real2Sim2Real way.
arXiv Detail & Related papers (2025-02-14T03:22:03Z)
- TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction [25.36756787147331]
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots.
We propose a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework.
We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly.
arXiv Detail & Related papers (2024-05-16T17:59:07Z)
- Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z)
- Sim2real Transfer Learning for Point Cloud Segmentation: An Industrial Application Case on Autonomous Disassembly [55.41644538483948]
We present an industrial application case that uses sim2real transfer learning for point cloud data.
We provide insights on how to generate and process synthetic point cloud data.
In addition, a novel patch-based attention network is proposed to tackle this problem.
arXiv Detail & Related papers (2023-01-12T14:00:37Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained with RL in simulation often fail when deployed directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed from point clouds together with environment randomization; a sketch of this kind of observation construction follows the list below.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
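As a companion to the last entry above, here is a minimal sketch of how a point-cloud observation might be built from a depth image and lightly randomized before being passed to an RL policy. The pinhole back-projection is standard; the intrinsics interface, sampling size, and noise level are illustrative assumptions rather than details taken from that paper.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W), in meters, to an (N, 3) point cloud
    with a pinhole camera model; zero-depth (invalid) pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points[z > 0]


def randomized_point_cloud_obs(depth, intrinsics, num_points=1024,
                               noise_std=0.005, rng=None):
    """Build a fixed-size, lightly randomized point-cloud observation.

    Subsampling to a fixed number of points and adding Gaussian jitter are
    common forms of observation randomization; the specific choices here are
    assumptions for illustration, not details from the cited paper.
    """
    rng = rng if rng is not None else np.random.default_rng()
    fx, fy, cx, cy = intrinsics
    points = depth_to_point_cloud(depth, fx, fy, cx, cy)

    # Sample a constant number of points so the observation shape is fixed.
    idx = rng.choice(len(points), size=num_points,
                     replace=len(points) < num_points)
    sampled = points[idx]

    # Small Gaussian jitter emulates sensor variation across environments.
    return sampled + rng.normal(scale=noise_std, size=sampled.shape)
```

A policy network would then consume the resulting fixed-size (num_points, 3) array, for example through a PointNet-style encoder; the encoder and the RL algorithm itself are outside the scope of this sketch.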
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.