Real-is-Sim: Bridging the Sim-to-Real Gap with a Dynamic Digital Twin
- URL: http://arxiv.org/abs/2504.03597v2
- Date: Wed, 02 Jul 2025 03:27:04 GMT
- Title: Real-is-Sim: Bridging the Sim-to-Real Gap with a Dynamic Digital Twin
- Authors: Jad Abou-Chakra, Lingfeng Sun, Krishan Rana, Brandon May, Karl Schmeckpeper, Niko Suenderhauf, Maria Vittoria Minniti, Laura Herlant
- Abstract summary: We introduce real-is-sim, a new approach to integrating simulation into behavior cloning pipelines. In contrast to real-only methods, which lack the ability to safely test policies before deployment, and sim-to-real methods, which require complex adaptation to cross the sim-to-real gap, our framework allows policies to seamlessly switch between running on real hardware and running in parallelized virtual environments.
- Score: 8.498460043101499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce real-is-sim, a new approach to integrating simulation into behavior cloning pipelines. In contrast to real-only methods, which lack the ability to safely test policies before deployment, and sim-to-real methods, which require complex adaptation to cross the sim-to-real gap, our framework allows policies to seamlessly switch between running on real hardware and running in parallelized virtual environments. At the center of real-is-sim is a dynamic digital twin, powered by the Embodied Gaussian simulator, that synchronizes with the real world at 60Hz. This twin acts as a mediator between the behavior cloning policy and the real robot. Policies are trained using representations derived from simulator states and always act on the simulated robot, never the real one. During deployment, the real robot simply follows the simulated robot's joint states, and the simulation is continuously corrected with real world measurements. This setup, where the simulator drives all policy execution and maintains real-time synchronization with the physical world, shifts the responsibility of crossing the sim-to-real gap to the digital twin's synchronization mechanisms, instead of the policy itself. We demonstrate real-is-sim on a long-horizon manipulation task (PushT), showing that virtual evaluations are consistent with real-world results. We further show how real-world data can be augmented with virtual rollouts and compare to policies trained on different representations derived from the simulator state including object poses and rendered images from both static and robot-mounted cameras. Our results highlight the flexibility of the real-is-sim framework across training, evaluation, and deployment stages. Videos available at https://real-is-sim.github.io.
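The deployment loop described in the abstract (the simulator drives policy execution, the real robot mirrors the simulated robot's joint states, and the simulation is continuously corrected with real-world measurements) can be sketched roughly as follows. This is a minimal illustration only: the class names (DigitalTwin, Policy, RealRobot) and method signatures are assumed placeholders, not the authors' actual API, and the real system is built on the Embodied Gaussians simulator.

```python
# Minimal sketch of the real-is-sim deployment loop described in the abstract.
# All object names and methods (twin, policy, real_robot) are hypothetical
# placeholders used only to illustrate the data flow.
import time

SYNC_HZ = 60  # the paper reports 60 Hz synchronization with the real world

def deployment_loop(twin, policy, real_robot):
    """Run the policy on the simulated robot while the real robot mirrors it."""
    period = 1.0 / SYNC_HZ
    while True:
        start = time.monotonic()

        # 1. Correct the digital twin with real-world measurements
        #    (visual observations and proprioception from the physical robot).
        twin.correct(real_robot.get_measurements())

        # 2. The policy only ever sees simulator-derived representations
        #    (e.g. object poses or rendered images) and acts on the simulated robot.
        obs = twin.get_policy_representation()
        action = policy.act(obs)
        twin.apply_action(action)
        twin.step(period)

        # 3. The real robot simply follows the simulated robot's joint states.
        real_robot.command_joint_positions(twin.get_robot_joint_states())

        # Keep the loop running at the synchronization rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```

During training and evaluation, the same act/step interface can run in many parallelized virtual environments with the correction step disabled, which is what allows virtual rollouts to augment real-world data and stand in for real-world evaluations.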
Related papers
- X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real [13.644777529832291]
X-Sim is a real-to-sim-to-real framework that uses object motion as a dense and transferable signal for learning robot policies. X-Sim starts by reconstructing a simulation from an RGBD human video and tracking object trajectories to define object-centric rewards. The learned policy is then distilled into an image-conditioned diffusion policy using synthetic rollouts rendered with varied viewpoints and lighting.
arXiv Detail & Related papers (2025-05-11T19:04:00Z)
- Empirical Analysis of Sim-and-Real Cotraining Of Diffusion Policies For Planar Pushing from Pixels [18.858676073824515]
Cotraining with demonstration data generated both in simulation and on real hardware has emerged as a powerful recipe to overcome the sim2real gap. Performance gains scale with simulated data, but eventually plateau; real-world data increases this performance ceiling. Perhaps surprisingly, having some visual domain gap actually helps the cotrained policy: binary probes reveal that high-performing policies learn to distinguish simulated domains from real.
arXiv Detail & Related papers (2025-03-28T17:25:57Z)
- Dynamics as Prompts: In-Context Learning for Sim-to-Real System Identifications [23.94013806312391]
We propose a novel approach that dynamically adjusts simulation environment parameters online using in-context learning. We validate our approach across two tasks: object scooping and table air hockey. Our approach delivers efficient and smooth system identification, advancing the deployment of robots in dynamic real-world scenarios.
arXiv Detail & Related papers (2024-10-27T07:13:38Z)
- Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL [25.991354823569033]
We show that in many regimes, while direct sim2real transfer may fail, we can utilize the simulator to learn a set of exploratory policies.
In particular, in the setting of low-rank MDPs, we show that coupling these exploratory policies with simple, practical approaches enables efficient learning in the real environment.
This is the first evidence that simulation transfer yields a provable gain in reinforcement learning in settings where direct sim2real transfer fails.
arXiv Detail & Related papers (2024-10-26T19:12:27Z)
- EAGERx: Graph-Based Framework for Sim2real Robot Learning [9.145895178276822]
Sim2real, that is, the transfer of learned control policies from simulation to the real world, is an area of growing interest in robotics.
We introduce EAGERx, a framework with a unified software pipeline for both real and simulated robot learning.
arXiv Detail & Related papers (2024-07-05T08:01:19Z)
- TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction [25.36756787147331]
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots.
We propose a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework.
We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly.
arXiv Detail & Related papers (2024-05-16T17:59:07Z)
- Evaluating Real-World Robot Manipulation Policies in Simulation [91.55267186958892]
Control and visual disparities between real and simulated environments are key challenges for reliable simulated evaluation.
We propose approaches for mitigating these gaps without needing to craft full-fidelity digital twins of real-world environments.
We create SIMPLER, a collection of simulated environments for manipulation policy evaluation on common real robot setups.
arXiv Detail & Related papers (2024-05-09T17:30:16Z)
- Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z)
- Robust Visual Sim-to-Real Transfer for Robotic Manipulation [79.66851068682779]
Learning visuomotor policies in simulation is much safer and cheaper than in the real world.
However, due to discrepancies between the simulated and real data, simulator-trained policies often fail when transferred to real robots.
One common approach to bridging the visual sim-to-real domain gap is domain randomization (DR).
arXiv Detail & Related papers (2023-07-28T05:47:24Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation and deploy it directly in the real world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Practical Imitation Learning in the Real World via Task Consistency Loss [18.827979446629296]
This paper introduces a self-supervised loss that encourages sim and real alignment both at the feature and action-prediction levels.
We achieve 80% success across ten seen and unseen scenes using only 16.2 hours of teleoperated demonstrations in sim and real.
arXiv Detail & Related papers (2022-02-03T21:43:06Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Reactive Long Horizon Task Execution via Visual Skill and Precondition Models [59.76233967614774]
We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation and from 10% to 80% in the real world compared with naive baselines.
arXiv Detail & Related papers (2020-11-17T15:24:01Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often do not work directly in the real world, which is known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed from point clouds and environment randomization (see the sketch after this list).
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
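As a rough illustration of the observation construction mentioned in the entry above, the sketch below builds a fixed-size point-cloud observation from a depth image and applies simple sensor-noise randomization. The function names, parameters, and the Gaussian jitter are assumptions chosen for illustration, not the paper's exact recipe.

```python
# Illustrative sketch (not the paper's implementation): build a fixed-size
# point-cloud observation with simple sensor/environment randomization.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W) in meters into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

def point_cloud_observation(depth, intrinsics, num_points=1024, noise_std=0.005, rng=None):
    """Randomized, fixed-size point-cloud observation for an RL policy."""
    rng = rng or np.random.default_rng()
    pts = depth_to_points(depth, *intrinsics)
    # Randomly subsample (with replacement) to a fixed number of points.
    idx = rng.integers(0, len(pts), size=num_points)
    pts = pts[idx]
    # Sensor-noise randomization: jitter point positions slightly.
    pts = pts + rng.normal(0.0, noise_std, size=pts.shape)
    return pts.astype(np.float32)
```

Training on such geometry-centric observations, with scene layout and sensor parameters randomized in simulation, tends to be less sensitive to the visual sim-to-real gap than training on raw RGB images, which is the general motivation behind the point-cloud-based approach above.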
This list is automatically generated from the titles and abstracts of the papers on this site.