LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged Robots
- URL: http://arxiv.org/abs/2409.17992v2
- Date: Mon, 23 Jun 2025 06:59:08 GMT
- Title: LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged Robots
- Authors: Peilin Wu, Weiji Xie, Jiahang Cao, Hang Lai, Weinan Zhang,
- Abstract summary: We propose LoopSR, a lifelong policy adaptation framework that continuously refines RL policies in the post-deployment stage. LoopSR employs a transformer-based encoder to map real-world trajectories into a latent space. An autoencoder architecture and contrastive learning methods are adopted to enhance feature extraction of real-world dynamics.
- Score: 20.715834172041763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning (RL) has shown its remarkable and generalizable capability in legged locomotion through sim-to-real transfer. However, while adaptive methods like domain randomization are expected to enhance policy robustness across diverse environments, they potentially compromise the policy's performance in any specific environment, leading to suboptimal real-world deployment due to the No Free Lunch theorem. To address this, we propose LoopSR, a lifelong policy adaptation framework that continuously refines RL policies in the post-deployment stage. LoopSR employs a transformer-based encoder to map real-world trajectories into a latent space and reconstruct a digital twin of the real world for further improvement. Autoencoder architecture and contrastive learning methods are adopted to enhance feature extraction of real-world dynamics. Simulation parameters for continual training are derived by combining predicted values from the decoder with retrieved parameters from a pre-collected simulation trajectory dataset. By leveraging simulated continual training, LoopSR achieves superior data efficiency compared with strong baselines, yielding eminent performance with limited data in both sim-to-sim and sim-to-real experiments.
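The trajectory-encoding and contrastive-learning idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear `encode` stands in for LoopSR's transformer encoder, and the dimensions, batch, and InfoNCE-style pairing of two noised views of the same dynamics are all invented for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(traj, W):
    """Stand-in trajectory encoder: mean-pool over time, then a linear map
    with a nonlinearity. (LoopSR uses a transformer; this is illustrative.)"""
    return np.tanh(traj.mean(axis=0) @ W)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor embedding should be
    closest to its own positive among all positives in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = matching pairs

# Toy batch: 4 trajectories of 50 steps with 12-dim observations, each
# paired with a lightly noised view of the same underlying dynamics.
W = rng.normal(size=(12, 8))
trajs = rng.normal(size=(4, 50, 12))
views = trajs + 0.01 * rng.normal(size=trajs.shape)
z_a = np.stack([encode(t, W) for t in trajs])
z_p = np.stack([encode(t, W) for t in views])
loss = info_nce(z_a, z_p)
```

Minimizing such a loss pulls embeddings of trajectories from the same dynamics together, which is the property the latent space needs before the decoder can map it back to simulation parameters.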
Related papers
- Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator [50.191655141020505]
Reinforcement Learning (RL) has demonstrated impressive capabilities in robotic control but remains challenging due to high sample complexity, safety concerns, and the sim-to-real gap. We introduce Offline Robotic World Model (RWM-O), a model-based approach that explicitly estimates uncertainty to improve policy learning without reliance on a physics simulator.
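A common way to estimate the epistemic uncertainty such model-based methods rely on is ensemble disagreement, sketched below with toy linear dynamics. The linear models, the penalty weight `lam`, and the reward value are illustrative assumptions, not RWM-O's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_ensemble(n_models, obs_dim, seed=1):
    """Toy linear dynamics ensemble (a real world model would be learned
    from offline data with neural networks)."""
    r = np.random.default_rng(seed)
    return [r.normal(scale=0.5, size=(obs_dim, obs_dim)) for _ in range(n_models)]

def penalized_step(models, s, reward, lam=1.0):
    """Predict the next state with the ensemble mean, and penalize the
    reward by member disagreement, a standard epistemic-uncertainty proxy
    in offline model-based RL."""
    preds = np.stack([s @ M for M in models])    # (K, obs_dim)
    mean_pred = preds.mean(axis=0)
    uncertainty = preds.std(axis=0).mean()       # disagreement across members
    return mean_pred, reward - lam * uncertainty

models = make_ensemble(5, obs_dim=3)
s = rng.normal(size=3)
next_s, r_pen = penalized_step(models, s, reward=1.0)
```

Penalizing reward where the ensemble disagrees steers the policy away from regions the offline data does not cover.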
arXiv Detail & Related papers (2025-04-23T12:58:15Z)
- Neural Fidelity Calibration for Informative Sim-to-Real Adaptation [10.117298045153564]
Deep reinforcement learning can seamlessly transfer agile locomotion and navigation skills from the simulator to real world.
However, bridging the sim-to-real gap with domain randomization or adversarial methods often demands expert physics knowledge to ensure policy robustness.
We propose Neural Fidelity Calibration (NFC), a novel framework that employs conditional score-based diffusion models to calibrate simulator physical coefficients and residual fidelity domains online during robot execution.
arXiv Detail & Related papers (2025-04-11T15:12:12Z)
- From Abstraction to Reality: DARPA's Vision for Robust Sim-to-Real Autonomy [6.402441477393285]
TIAMAT aims to address rapid and robust transfer of autonomy technologies across dynamic and complex environments.
Current methods for simulation-to-reality (sim-to-real) transfer often rely on high-fidelity simulations.
TIAMAT's approaches aim to achieve abstract-to-real transfer for effective and rapid real-world adaptation.
arXiv Detail & Related papers (2025-03-14T02:06:10Z)
- A Real-Sim-Real (RSR) Loop Framework for Generalizable Robotic Policy Transfer with Differentiable Simulation [13.15220962477623]
This paper introduces a novel Real-Sim-Real loop framework to address the gap between simulation and real-world conditions.
A key contribution of our work is the design of an informative cost function that encourages the collection of diverse and representative real-world data.
Our approach is implemented on the versatile Mujoco MJX platform, and our framework is compatible with a wide range of robotic systems.
arXiv Detail & Related papers (2025-03-13T07:27:05Z)
- COSBO: Conservative Offline Simulation-Based Policy Optimization [7.696359453385686]
Offline reinforcement learning allows training RL models on data from live deployments.
In contrast, simulation environments attempting to replicate the live environment can be used instead of the live data.
We propose a method that combines an imperfect simulation environment with data from the target environment, to train an offline reinforcement learning policy.
arXiv Detail & Related papers (2024-09-22T12:20:55Z)
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
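The analytic-policy-gradient idea can be shown on a minimal 1-D system: when the dynamics are known and differentiable, the gradient of the rollout cost with respect to the controller parameters can be computed exactly and used for descent. The dynamics, gain parametrization, and learning rate below are invented for the toy; the gradient is derived by hand rather than by an autodiff framework.

```python
def rollout_cost(k, s0=1.0, dt=0.1, T=20):
    """Closed-loop rollout with action a_t = -k * s_t through known,
    differentiable dynamics; cost accumulates squared state error."""
    s, cost = s0, 0.0
    for _ in range(T):
        cost += s ** 2
        s = s + dt * (-k * s)
    return cost

def analytic_grad(k, s0=1.0, dt=0.1, T=20):
    """Hand-derived gradient: s_t = (1 - k*dt)^t * s0, so
    dJ/dk = sum_t 2 * s_t * t * (1 - k*dt)^(t-1) * (-dt) * s0."""
    g, base = 0.0, 1.0 - k * dt
    for t in range(1, T):              # t = 0 term is zero
        s_t = base ** t * s0
        g += 2 * s_t * (t * base ** (t - 1) * (-dt) * s0)
    return g

# Gradient descent on the controller gain using the analytic gradient.
k = 0.5
for _ in range(200):
    k -= 0.05 * analytic_grad(k)
```

Because the gradient flows through the environment dynamics themselves, each update uses far more information than a score-function estimate would.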
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Robust Visual Sim-to-Real Transfer for Robotic Manipulation [79.66851068682779]
Learning visuomotor policies in simulation is much safer and cheaper than in the real world.
However, due to discrepancies between the simulated and real data, simulator-trained policies often fail when transferred to real robots.
One common approach to bridge the visual sim-to-real domain gap is domain randomization (DR).
arXiv Detail & Related papers (2023-02-09T19:10:57Z)
- AdaptSim: Task-Driven Simulation Adaptation for Sim-to-Real Transfer [10.173835871228718]
AdaptSim aims to optimize task performance in target (real) environments.
First, we meta-learn an adaptation policy in simulation using reinforcement learning.
We then perform iterative real-world adaptation by inferring new simulation parameter distributions for policy training.
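The step of inferring new simulation parameter distributions from real rollouts can be illustrated with a much simpler cross-entropy-style update in the same spirit (AdaptSim itself meta-learns this adaptation policy). The one-parameter "simulator", the true friction value, and all population sizes below are invented for the toy.

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout(friction, steps=30):
    """Toy 1-D dynamics whose decay depends on a single sim parameter."""
    v, traj = 1.0, []
    for _ in range(steps):
        v *= (1.0 - friction)
        traj.append(v)
    return np.array(traj)

real_traj = rollout(0.12)              # pretend this came from hardware

mu, sigma = 0.5, 0.2                   # initial sim-parameter distribution
for _ in range(15):                    # iterative adaptation rounds
    cand = np.clip(rng.normal(mu, sigma, size=32), 0.0, 1.0)
    scores = [-np.mean((rollout(c) - real_traj) ** 2) for c in cand]
    elite = cand[np.argsort(scores)[-8:]]      # best-matching parameters
    mu, sigma = elite.mean(), elite.std() + 1e-3
```

Each round narrows the parameter distribution toward values whose simulated rollouts match the real trajectory, mirroring the iterate-and-refit loop described above.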
arXiv Detail & Related papers (2022-10-27T16:37:52Z)
- Provable Sim-to-real Transfer in Continuous Domain with Partial Observations [39.18274543757048]
Sim-to-real transfer trains RL agents in the simulated environments and then deploys them in the real world.
We show that a popular robust adversarial training algorithm is capable of learning a policy from the simulated environment that is competitive to the optimal policy in the real-world environment.
arXiv Detail & Related papers (2022-03-04T10:27:01Z)
- Cloud-Edge Training Architecture for Sim-to-Real Deep Reinforcement Learning [0.8399688944263843]
Deep reinforcement learning (DRL) is a promising approach to solve complex control tasks by learning policies through interactions with the environment.
Sim-to-real approaches leverage simulations to pretrain DRL policies and then deploy them in the real world.
This work proposes a distributed cloud-edge architecture to train DRL agents in the real world in real-time.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-04-15T17:59:55Z)
- Auto-Tuned Sim-to-Real Transfer [143.44593793640814]
Policies trained in simulation often fail when transferred to the real world.
Current approaches to tackle this problem, such as domain randomization, require prior knowledge and engineering.
We propose a method for automatically tuning simulator system parameters to match the real world.
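When the discrepancy between simulated and real observations is monotone in a parameter, automatic tuning can be as simple as a bisection search, sketched below. This is only an illustration of the tuning loop; the paper learns the update direction from observations instead, and `simulate`, `real_obs`, and the gain parameter are invented for this toy.

```python
def tune_param(simulate, real_obs, lo, hi, iters=30):
    """Bisection-style auto-tuning: run the simulator, compare against the
    real measurement, and move the parameter toward the real system.
    Assumes the sim output is monotone increasing in the parameter."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if simulate(mid) < real_obs:   # sim under-shoots -> raise parameter
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy "simulator": final speed grows linearly with a motor-gain parameter.
simulate = lambda gain: 3.0 * gain
real_obs = 1.8                          # measured on the real robot
gain = tune_param(simulate, real_obs, 0.0, 2.0)
```

The loop needs only black-box simulator evaluations and a handful of real measurements, which is what makes such auto-tuning attractive compared with blanket domain randomization.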
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2020-09-27T15:06:54Z)
- Predicting Sim-to-Real Transfer with Probabilistic Dynamics Models [3.7692466417039814]
We propose a method to predict the sim-to-real transfer performance of RL policies.
A probabilistic dynamics model is trained alongside the policy and evaluated on a fixed set of real-world trajectories.
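Scoring a probabilistic dynamics model on held-out real transitions can be done with its average predictive log-likelihood, sketched below. The linear "models", the Gaussian noise scale, and the toy real system `s_next = 0.9 * s` are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_loglik(pred_mean, pred_std, target):
    """Per-dimension Gaussian log-likelihood of the observed next state."""
    var = pred_std ** 2
    return -0.5 * (np.log(2 * np.pi * var) + (target - pred_mean) ** 2 / var)

def transfer_score(model, real_transitions):
    """Average predictive log-likelihood of a (mean_fn, std) dynamics model
    on real-world transitions; a higher score suggests better transfer."""
    mean_fn, std = model
    lls = [gaussian_loglik(mean_fn(s), std, s_next).sum()
           for s, s_next in real_transitions]
    return float(np.mean(lls))

# Two toy models of the same real system s_next = 0.9 * s + noise:
good = (lambda s: 0.9 * s, 0.1)
bad = (lambda s: 0.5 * s, 0.1)
real = [(s, 0.9 * s + 0.01 * rng.normal(size=3))
        for s in rng.normal(size=(20, 3))]
```

Because the score is computed on a fixed set of real trajectories, it can rank candidate policies or models before any risky deployment.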
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools to solve complex robotic tasks.
RL policies trained in simulation often do not work directly in the real world, which is known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
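Constructing a point-cloud observation from a depth image is typically done by pinhole-camera unprojection, sketched below. The camera intrinsics and the flat-wall depth image are invented for the example; the paper's full observation pipeline and randomization scheme differ.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Unproject a depth image (meters) into a camera-frame point cloud
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.ones((4, 4))            # toy scene: a flat wall 1 m away
pts = depth_to_pointcloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

A point-cloud observation space has the appeal that geometry transfers between simulation and reality more directly than raw RGB does.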
arXiv Detail & Related papers (2020-04-30T10:47:02Z)
- Sim-to-Real Transfer with Incremental Environment Complexity for Reinforcement Learning of Depth-Based Robot Navigation [1.290382979353427]
A Soft Actor-Critic (SAC) training strategy using incremental environment complexity is proposed to drastically reduce the need for additional training in the real world.
The application addressed is depth-based mapless navigation, where a mobile robot should reach a given waypoint in a cluttered environment with no prior mapping information.
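The incremental-complexity idea amounts to a curriculum rule: advance to a harder (more cluttered) environment only once the policy is reliable at the current difficulty. The threshold, level count, and success rates below are invented for illustration.

```python
def update_curriculum(level, success_rate, threshold=0.8, max_level=5):
    """Advance to a more cluttered environment once the policy succeeds
    often enough at the current difficulty; never skip levels or regress."""
    if success_rate >= threshold and level < max_level:
        return level + 1
    return level

# Simulated sequence of evaluation results across training phases.
levels, level = [], 0
for rate in [0.3, 0.6, 0.85, 0.9, 0.7, 0.95]:
    level = update_curriculum(level, rate)
    levels.append(level)
```

Gating difficulty on measured success keeps the agent training near the edge of its competence instead of drowning it in clutter from the start.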
arXiv Detail & Related papers (2020-04-30T10:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.