What Matters for Simulation to Online Reinforcement Learning on Real Robots
- URL: http://arxiv.org/abs/2602.20220v1
- Date: Mon, 23 Feb 2026 10:34:15 GMT
- Title: What Matters for Simulation to Online Reinforcement Learning on Real Robots
- Authors: Yarden As, Dhruva Tirumala, René Zurbrügg, Chenhao Li, Stelian Coros, Andreas Krause, Markus Wulfmeier
- Abstract summary: We investigate what specific design choices enable successful online reinforcement learning on physical robots. We systematically ablate algorithmic, systems, and experimental decisions that are typically left implicit in prior work. We find that some widely used defaults can be harmful, while a set of robust, readily adopted design choices within standard RL practice yields stable learning across tasks and hardware.
- Score: 51.77095085120584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate what specific design choices enable successful online reinforcement learning (RL) on physical robots. Across 100 real-world training runs on three distinct robotic platforms, we systematically ablate algorithmic, systems, and experimental decisions that are typically left implicit in prior work. We find that some widely used defaults can be harmful, while a set of robust, readily adopted design choices within standard RL practice yields stable learning across tasks and hardware. These results provide the first large-sample empirical study of such design choices, enabling practitioners to deploy online RL with lower engineering effort.
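The online-RL pattern the abstract studies — collect experience on the real system, reset between episodes, enforce time limits, and update the policy after every step — can be sketched in a toy, hardware-free form. Everything below (the `ToyEnv` stand-in, the tabular Q-learning update, and all hyperparameters) is illustrative and not taken from the paper:

```python
import random

class ToyEnv:
    """Stand-in for a real robot: 1-D chain task, goal at state 5."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, min(5, self.state + (1 if action == 1 else -1)))
        done = self.state == 5
        reward = 1.0 if done else -0.1  # small step penalty, bonus at the goal
        return self.state, reward, done

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    env = ToyEnv()
    q = [[0.0, 0.0] for _ in range(6)]  # Q-table: 6 states x 2 actions
    for _ in range(episodes):
        s = env.reset()                 # explicit reset between episodes
        done, steps = False, 0
        while not done and steps < 50:  # time limit guards against stalls
            # epsilon-greedy exploration, greedy otherwise
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: q[s][i])
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])  # online TD update after every step
            s, steps = s2, steps + 1
    return q

q_table = train()
```

After training, the greedy policy should prefer moving right (toward the goal) in every non-terminal state. Real-robot loops differ mainly in that `reset` and `step` talk to hardware, which is exactly where the design choices the paper ablates (reset handling, time limits, update frequency) come into play.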
Related papers
- RSL-RL: A Learning Library for Robotics Research [9.89623087508662]
RSL-RL is an open-source Reinforcement Learning library tailored to the specific needs of the robotics community. Unlike broad general-purpose frameworks, its philosophy prioritizes a compact and easily modifiable codebase, allowing researchers to adapt and extend algorithms with minimal overhead.
arXiv Detail & Related papers (2025-09-13T01:31:43Z)
- Reinforcement Learning Within the Classical Robotics Stack: A Case Study in Robot Soccer [25.161615988222934]
We develop a novel architecture integrating model-free reinforcement learning (RL) within a classical robotics stack. Our architecture led to victory in the 2024 RoboCup SPL Challenge Shield Division.
arXiv Detail & Related papers (2024-12-12T16:25:10Z)
- Solving Multi-Goal Robotic Tasks with Decision Transformer [0.0]
We introduce a novel adaptation of the decision transformer architecture for offline multi-goal reinforcement learning in robotics.
Our approach integrates goal-specific information into the decision transformer, allowing it to handle complex tasks in an offline setting.
arXiv Detail & Related papers (2024-10-08T20:35:30Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [82.46975428739329]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment. We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation. These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [96.5899286619008]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies. Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem. We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Adversarial Training is Not Ready for Robot Learning [55.493354071227174]
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations.
We show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects.
Our results suggest that adversarial training is not yet ready for robot learning.
arXiv Detail & Related papers (2021-03-15T07:51:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.