RoboRAN: A Unified Robotics Framework for Reinforcement Learning-Based Autonomous Navigation
- URL: http://arxiv.org/abs/2505.14526v2
- Date: Wed, 05 Nov 2025 17:12:59 GMT
- Title: RoboRAN: A Unified Robotics Framework for Reinforcement Learning-Based Autonomous Navigation
- Authors: Matteo El-Hariry, Antoine Richard, Ricard M. Castan, Luis F. W. Batista, Matthieu Geist, Cedric Pradalier, Miguel Olivares-Mendez,
- Abstract summary: We present a multi-domain framework for training, evaluating and deploying RL-based navigation policies across diverse robotic platforms and operational environments. Our work presents four key contributions: (1) a scalable and modular framework, facilitating seamless robot-task interchangeability and reproducible training pipelines; (2) sim-to-real transfer demonstrated through real-world experiments with multiple robots; (3) the release of the first open-source API for deploying Isaac Lab-trained policies to real robots; and (4) uniform tasks and metrics for cross-medium evaluation.
- Score: 15.548637925166986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous robots must navigate and operate in diverse environments, from terrestrial and aquatic settings to aerial and space domains. While Reinforcement Learning (RL) has shown promise in training policies for specific autonomous robots, existing frameworks and benchmarks are often constrained to unique platforms, limiting generalization and fair comparisons across different mobility systems. In this paper, we present a multi-domain framework for training, evaluating and deploying RL-based navigation policies across diverse robotic platforms and operational environments. Our work presents four key contributions: (1) a scalable and modular framework, facilitating seamless robot-task interchangeability and reproducible training pipelines; (2) sim-to-real transfer demonstrated through real-world experiments with multiple robots, including a satellite robotic simulator, an unmanned surface vessel, and a wheeled ground vehicle; (3) the release of the first open-source API for deploying Isaac Lab-trained policies to real robots, enabling lightweight inference and rapid field validation; and (4) uniform tasks and metrics for cross-medium evaluation, through a unified evaluation testbed to assess performance of navigation tasks in diverse operational conditions (aquatic, terrestrial and space). By ensuring consistency between simulation and real-world deployment, RoboRAN lowers the barrier to developing adaptable RL-based navigation strategies. Its modular design enables straightforward integration of new robots and tasks through predefined templates, fostering reproducibility and extension to diverse domains. To support the community, we release RoboRAN as open-source.
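The abstract describes two mechanisms worth making concrete: a uniform task template shared across robots, and a lightweight inference loop for deploying a trained policy. The following is a minimal, self-contained Python sketch of that pattern only; all names (`NavigationTask`, `deploy`, `stub_policy`) are hypothetical and do not come from the RoboRAN codebase, and a trivial go-to-goal controller stands in for a trained RL policy.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    position: tuple  # current (x, y) of the robot
    goal: tuple      # target (x, y)

class NavigationTask:
    """Uniform task interface a framework could share across robot platforms."""
    def __init__(self, goal):
        self.goal = goal

    def observe(self, position):
        return Observation(position=position, goal=self.goal)

    def done(self, position, tol=0.1):
        dx = self.goal[0] - position[0]
        dy = self.goal[1] - position[1]
        return (dx * dx + dy * dy) ** 0.5 < tol

def stub_policy(obs):
    """Stand-in for an exported RL policy: step at most 0.5 m toward the goal."""
    dx = obs.goal[0] - obs.position[0]
    dy = obs.goal[1] - obs.position[1]
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    step = min(0.5, norm)
    return (step * dx / norm, step * dy / norm)

def deploy(task, policy, start, max_steps=100):
    """Lightweight inference loop of the kind a deployment API might run."""
    pos = start
    for _ in range(max_steps):
        if task.done(pos):
            break
        ax, ay = policy(task.observe(pos))
        pos = (pos[0] + ax, pos[1] + ay)
    return pos

final = deploy(NavigationTask(goal=(3.0, 4.0)), stub_policy, start=(0.0, 0.0))
```

Because the task exposes only `observe` and `done`, swapping in a different robot or environment only requires a new task class conforming to the same template, which is the interchangeability the abstract claims.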
Related papers
- Heterogeneous Robot Collaboration in Unstructured Environments with Grounded Generative Intelligence [54.91177026001217]
Large language model (LLM)-enabled teaming methods typically assume well-structured and known environments. We present SPINE-HT, a framework that addresses these limitations by grounding the reasoning abilities of LLMs in the context of a heterogeneous robot team. Our framework achieves nearly twice the success rate of prior LLM-enabled heterogeneous teaming approaches.
arXiv Detail & Related papers (2025-10-30T18:24:38Z) - Space Robotics Bench: Robot Learning Beyond Earth [16.948852537273655]
Space Robotics Bench is an open-source simulation framework for robot learning in space. It integrates on-demand procedural generation with massively parallel simulation environments. It includes a comprehensive suite of benchmark tasks that span a wide range of mission-relevant scenarios.
arXiv Detail & Related papers (2025-09-27T14:28:31Z) - Distributed AI Agents for Cognitive Underwater Robot Autonomy [5.644612398323221]
This paper presents Underwater Robot Self-Organizing Autonomy (UROSA), a groundbreaking architecture leveraging distributed Large Language Model AI agents integrated within the Robot Operating System 2 (ROS 2) framework. Central innovations include flexible agents that dynamically adapt their roles, retrieval-augmented generation, and autonomous on-the-fly ROS 2 node generation.
arXiv Detail & Related papers (2025-07-31T17:18:55Z) - Deploying Foundation Model-Enabled Air and Ground Robots in the Field: Challenges and Opportunities [65.98704516122228]
The integration of foundation models (FMs) into robotics has enabled robots to understand natural language and reason about the semantics in their environments. This paper addresses the deployment of FM-enabled robots in the field, where missions often require a robot to operate in large-scale and unstructured environments. We present the first demonstration of large-scale LLM-enabled robot planning in unstructured environments, across missions spanning several kilometers.
arXiv Detail & Related papers (2025-05-14T15:28:43Z) - Sim-to-Real Transfer for Mobile Robots with Reinforcement Learning: from NVIDIA Isaac Sim to Gazebo and Real ROS 2 Robots [1.2773537446441052]
This article focuses on demonstrating the applications of Isaac in local planning and obstacle avoidance. We benchmark end-to-end policies against the state-of-the-art Nav2 navigation stack in the Robot Operating System (ROS). We also cover the sim-to-real transfer process by demonstrating zero-shot transferability of policies trained in the Isaac simulator to real-world robots.
arXiv Detail & Related papers (2025-01-06T10:26:16Z) - An Open-source Sim2Real Approach for Sensor-independent Robot Navigation in a Grid [0.0]
We bridge the gap between an agent trained in a simulated environment and its real-world implementation for navigating a robot in a similar setting. Specifically, we focus on navigating a quadruped robot in a real-world grid-like environment inspired by the Gymnasium Frozen Lake environment.
arXiv Detail & Related papers (2024-11-05T20:18:29Z) - Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Co-NavGPT: Multi-Robot Cooperative Visual Semantic Navigation Using Vision Language Models [8.668211481067457]
Co-NavGPT is a novel framework that integrates a Vision Language Model (VLM) as a global planner. It aggregates sub-maps from multiple robots with diverse viewpoints into a unified global map. The VLM uses this information to assign frontiers across the robots, facilitating coordinated and efficient exploration.
arXiv Detail & Related papers (2023-10-11T23:17:43Z) - NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z) - Principles and Guidelines for Evaluating Social Robot Navigation
Algorithms [44.51586279645062]
Social robot navigation is difficult to evaluate because it involves dynamic human agents and their perceptions of the appropriateness of robot behavior.
Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context, (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation, and (c) a social navigation metrics framework to make it easier to compare results from different simulators, robots and datasets.
arXiv Detail & Related papers (2023-06-29T07:31:43Z) - PIC4rl-gym: a ROS2 modular framework for Robots Autonomous Navigation
with Deep Reinforcement Learning [0.4588028371034407]
This work introduces the PIC4rl-gym, a fundamental modular framework to enhance navigation and learning research.
The paper describes the whole structure of the PIC4rl-gym, which fully integrates DRL agent's training and testing in several indoor and outdoor navigation scenarios.
A modular approach is adopted to easily customize the simulation by selecting new platforms, sensors, or models.
arXiv Detail & Related papers (2022-11-19T14:58:57Z) - GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z) - Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to exactly achieve the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z) - Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic
Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z) - Embodied Visual Navigation with Automatic Curriculum Learning in Real
Environments [20.017277077448924]
NavACL is a method of automatic curriculum learning tailored to the navigation task.
Deep reinforcement learning agents trained using NavACL significantly outperform state-of-the-art agents trained with uniform sampling.
Our agents can navigate through unknown cluttered indoor environments to semantically-specified targets using only RGB images.
arXiv Detail & Related papers (2020-09-11T13:28:26Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.