PIC4rl-gym: a ROS2 modular framework for Robots Autonomous Navigation with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2211.10714v1
- Date: Sat, 19 Nov 2022 14:58:57 GMT
- Title: PIC4rl-gym: a ROS2 modular framework for Robots Autonomous Navigation with Deep Reinforcement Learning
- Authors: Mauro Martini, Andrea Eirale, Simone Cerrato, Marcello Chiaberge
- Abstract summary: This work introduces the PIC4rl-gym, a fundamental modular framework to enhance navigation and learning research.
The paper describes the whole structure of the PIC4rl-gym, which fully integrates DRL agents' training and testing in several indoor and outdoor navigation scenarios.
A modular approach is adopted to easily customize the simulation by selecting new platforms, sensors, or models.
- Score: 0.4588028371034407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning agents can optimize standard autonomous navigation, improving the flexibility, efficiency, and computational cost of the system through a wide variety of approaches. This work introduces the PIC4rl-gym, a fundamental modular framework to enhance navigation and learning research by combining ROS2 and Gazebo, the standard tools of the robotics community, with Deep Reinforcement Learning (DRL). The paper describes the whole structure of the PIC4rl-gym, which fully integrates DRL agents' training and testing in several indoor and outdoor navigation scenarios and tasks. A modular approach is adopted to easily customize the simulation by selecting new platforms, sensors, or models. We demonstrate the potential of our novel gym by benchmarking the resulting policies, trained for different navigation tasks, with a complete set of metrics.
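To make the structure concrete, the sketch below shows how a gym-style interface can wrap a ROS2/Gazebo robot for DRL training. It is a minimal illustration under assumed names: the topics, spaces, reward terms, and reset behaviour are placeholders, not the actual PIC4rl-gym API.

```python
# Minimal sketch of a gym-style wrapper around a ROS2/Gazebo robot, in the
# spirit of the PIC4rl-gym. Topic names, spaces, reward terms, and the reset
# behaviour are illustrative assumptions, not the framework's real API.
import numpy as np
import gymnasium as gym
import rclpy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan


class NavEnv(gym.Env):
    """Goal-reaching navigation task: 2D lidar observation, (v, w) action."""

    def __init__(self):
        rclpy.init()
        self.node = rclpy.create_node("nav_env")
        self.action_space = gym.spaces.Box(low=np.array([0.0, -1.0], dtype=np.float32),
                                           high=np.array([0.5, 1.0], dtype=np.float32))
        self.observation_space = gym.spaces.Box(0.0, 30.0, shape=(360,), dtype=np.float32)
        self.cmd_pub = self.node.create_publisher(Twist, "/cmd_vel", 10)
        self.scan = np.full(360, 30.0, dtype=np.float32)
        self.node.create_subscription(LaserScan, "/scan", self._on_scan, 10)

    def _on_scan(self, msg):
        ranges = np.nan_to_num(np.asarray(msg.ranges, dtype=np.float32), posinf=30.0)
        self.scan = np.clip(ranges, 0.0, 30.0)

    def step(self, action):
        cmd = Twist()
        cmd.linear.x, cmd.angular.z = float(action[0]), float(action[1])
        self.cmd_pub.publish(cmd)
        rclpy.spin_once(self.node, timeout_sec=0.05)  # let sensor callbacks run
        obs = self.scan
        collided = bool(obs.min() < 0.2)
        # Illustrative reward: small step penalty plus a collision penalty;
        # a full setup would add goal-progress and heading terms.
        reward = -0.01 - (5.0 if collided else 0.0)
        return obs, reward, collided, False, {}

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # A full setup would call a Gazebo service here to respawn the robot
        # and sample a new goal pose; this sketch only refreshes the scan.
        rclpy.spin_once(self.node, timeout_sec=0.05)
        return self.scan, {}
```

Keeping the simulator behind a standard reset/step interface is what makes platforms, sensors, and agent models swappable independently, which is the modularity the abstract refers to.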
Related papers
- Deep Learning-Based Multi-Modal Fusion for Robust Robot Perception and Navigation [1.71849622776539]
This paper introduces a novel deep learning-based multimodal fusion architecture aimed at enhancing the perception capabilities of autonomous navigation robots.
By utilizing innovative feature extraction modules, adaptive fusion strategies, and time-series modeling mechanisms, the system effectively integrates RGB images and LiDAR data.
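As a rough, generic illustration of the fusion pattern described above (per-modality feature extraction, an adaptive fusion gate, and temporal modeling), here is a short PyTorch sketch; the module name, layer sizes, and gating scheme are assumptions, not the paper's architecture.

```python
# Generic adaptive RGB + LiDAR fusion sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn


class AdaptiveFusionNet(nn.Module):
    def __init__(self, lidar_dim=360, feat_dim=128):
        super().__init__()
        # Per-modality feature extractors
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lidar_encoder = nn.Sequential(
            nn.Linear(lidar_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        # Adaptive fusion: a learned gate weighs each modality per time step
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        # Time-series modeling over the fused features
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 2)  # e.g. a (v, w) command

    def forward(self, rgb_seq, lidar_seq):
        # rgb_seq: (B, T, 3, H, W), lidar_seq: (B, T, lidar_dim)
        b, t = rgb_seq.shape[:2]
        rgb_f = self.rgb_encoder(rgb_seq.flatten(0, 1)).view(b, t, -1)
        lidar_f = self.lidar_encoder(lidar_seq)
        w = self.gate(torch.cat([rgb_f, lidar_f], dim=-1))    # (B, T, 2)
        fused = w[..., :1] * rgb_f + w[..., 1:] * lidar_f     # weighted sum
        out, _ = self.temporal(fused)
        return self.head(out[:, -1])                          # last-step output
```

The softmax gate re-weights the two modalities at every time step, which is one common way to realize adaptive fusion.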
arXiv Detail & Related papers (2025-04-26T19:04:21Z)
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- CarDreamer: Open-Source Learning Platform for World Model based Autonomous Driving [25.49856190295859]
World model (WM) based reinforcement learning (RL) has emerged as a promising approach by learning and predicting the complex dynamics of various environments.
However, there is no accessible platform for training and testing such algorithms in sophisticated driving environments.
We introduce CarDreamer, the first open-source learning platform designed specifically for developing WM based autonomous driving algorithms.
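As a generic illustration of the world-model RL loop this entry refers to (not CarDreamer's implementation), the toy sketch below fits a small dynamics model on logged transitions and improves a policy on imagined rollouts; all shapes, losses, and hyperparameters are assumptions.

```python
# Toy world-model RL loop: learn dynamics from data, improve the policy in imagination.
import torch
import torch.nn as nn

obs_dim, act_dim, horizon = 16, 2, 5

# Learned dynamics model: predicts the next observation and the reward
dynamics = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
                         nn.Linear(128, obs_dim + 1))
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
model_opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)


def train_world_model(obs, act, next_obs, rew):
    """Fit the dynamics model on logged transitions (rew has shape (B, 1))."""
    pred = dynamics(torch.cat([obs, act], dim=-1))
    loss = nn.functional.mse_loss(pred, torch.cat([next_obs, rew], dim=-1))
    model_opt.zero_grad(); loss.backward(); model_opt.step()
    return loss.item()


def improve_policy(start_obs):
    """Back-propagate imagined returns through the learned model (policy update only)."""
    obs, ret = start_obs, 0.0
    for _ in range(horizon):
        act = policy(obs)
        pred = dynamics(torch.cat([obs, act], dim=-1))
        obs, rew = pred[..., :obs_dim], pred[..., obs_dim:]
        ret = ret + rew.mean()
    policy_opt.zero_grad(); (-ret).backward(); policy_opt.step()
```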
arXiv Detail & Related papers (2024-05-15T05:57:20Z)
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
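The key idea of sharing one policy across goal-directed navigation and goal-agnostic exploration can be illustrated with goal masking, sketched below. For brevity the action head here is a plain regressor rather than the diffusion head the title refers to, and all names and sizes are assumptions.

```python
# Goal-masking sketch: one policy handles both modes. mask = 1 hides the goal
# (exploration); mask = 0 conditions on it (navigation). Illustrative only.
import torch
import torch.nn as nn


class GoalMaskedPolicy(nn.Module):
    def __init__(self, obs_dim=512, goal_dim=512, act_dim=2):
        super().__init__()
        self.goal_token = nn.Parameter(torch.zeros(goal_dim))  # learned "no goal" token
        self.net = nn.Sequential(nn.Linear(obs_dim + goal_dim, 256), nn.ReLU(),
                                 nn.Linear(256, act_dim))

    def forward(self, obs_feat, goal_feat, goal_mask):
        # goal_mask: (B, 1); replace the goal features with the learned token when masked
        goal = goal_mask * self.goal_token + (1.0 - goal_mask) * goal_feat
        return self.net(torch.cat([obs_feat, goal], dim=-1))


policy = GoalMaskedPolicy()
obs, goal = torch.randn(4, 512), torch.randn(4, 512)
explore = policy(obs, goal, torch.ones(4, 1))    # goal-agnostic exploration
navigate = policy(obs, goal, torch.zeros(4, 1))  # goal-directed navigation
```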
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- DRIFT: Deep Reinforcement Learning for Intelligent Floating Platforms Trajectories [18.420795137038677]
Floating platforms serve as versatile test-beds to emulate micro-gravity environments on Earth.
Our suite achieves robustness, adaptability, and good transferability from simulation to reality.
arXiv Detail & Related papers (2023-10-06T14:11:35Z)
- ViNT: A Foundation Model for Visual Navigation [52.2571739391896]
Visual Navigation Transformer (ViNT) is a foundation model for vision-based robotic navigation.
ViNT is trained with a general goal-reaching objective that can be used with any navigation dataset.
It exhibits positive transfer, outperforming specialist models trained on singular datasets.
arXiv Detail & Related papers (2023-06-26T16:57:03Z)
- Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments [38.23943905182543]
We present Orbit, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim.
It offers a modular design to create robotic environments with photo-realistic scenes and high-fidelity rigid and deformable body simulation.
We aim to support various research areas, including representation learning, reinforcement learning, imitation learning, and task and motion planning.
arXiv Detail & Related papers (2023-01-10T20:19:17Z)
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Bayesian Generational Population-Based Training [35.70338636901159]
Population-Based Training (PBT) has led to impressive performance in several large scale settings.
We introduce two new innovations in PBT-style methods.
We show that these innovations lead to large performance gains.
arXiv Detail & Related papers (2022-07-19T16:57:38Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides the model-based RL (MBRL) agent with training samples taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories solving prior tasks from the storage.
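A toy illustration of this retrospective-exploration idea is sketched below: a small addressing network scores stored trajectories against the current task and the top-scoring ones are handed to the agent. The feature shapes, the scoring network, and the training signal are assumptions, not the paper's formulation.

```python
# Learned addressing over task-agnostic trajectory storage (illustrative sketch).
import torch
import torch.nn as nn

traj_feat_dim, task_feat_dim = 64, 16
addressing = nn.Sequential(nn.Linear(traj_feat_dim + task_feat_dim, 64),
                           nn.ReLU(), nn.Linear(64, 1))


def select_trajectories(storage_feats, task_feat, k=32):
    """Rank stored trajectories and return indices of the top-k for the agent."""
    task = task_feat.expand(storage_feats.size(0), -1)
    scores = addressing(torch.cat([storage_feats, task], dim=-1)).squeeze(-1)
    return scores.topk(k).indices


def addressing_loss(selected_scores, performance_gain):
    """Crude stand-in for the meta-objective: reinforce selections in
    proportion to the observed improvement of the agent."""
    return -(torch.log_softmax(selected_scores, dim=0) * performance_gain).mean()
```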
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Multi-Robot Deep Reinforcement Learning for Mobile Navigation [82.62621210336881]
We propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt).
At training time, HInt learns separate perception and dynamics models, and at test time, HInt integrates the two models in a hierarchical manner and plans actions with the integrated model.
Our mobile navigation experiments show that HInt outperforms conventional hierarchical policies and single-source approaches.
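The train-separately, compose-at-test-time pattern described here can be sketched generically: a perception model compresses images into a compact state, a dynamics model rolls that state forward, and a simple planner scores sampled action sequences. This is a schematic illustration, not HInt itself; architectures and the random-shooting planner are assumptions.

```python
# Compose a perception model and a dynamics model, then plan over the composition.
import torch
import torch.nn as nn

state_dim, act_dim, horizon, n_candidates = 8, 2, 10, 64

perception = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(16, state_dim))
dynamics = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.ReLU(),
                         nn.Linear(64, state_dim))


@torch.no_grad()
def plan(image, reward_fn):
    """Random-shooting planner; reward_fn maps a (N, state_dim) batch to (N,) rewards."""
    state = perception(image.unsqueeze(0)).repeat(n_candidates, 1)
    actions = torch.rand(n_candidates, horizon, act_dim) * 2 - 1
    returns = torch.zeros(n_candidates)
    for t in range(horizon):
        state = dynamics(torch.cat([state, actions[:, t]], dim=-1))
        returns += reward_fn(state)
    return actions[returns.argmax(), 0]  # execute the first action of the best plan
```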
arXiv Detail & Related papers (2021-06-24T19:07:40Z)
- Embodied Visual Navigation with Automatic Curriculum Learning in Real Environments [20.017277077448924]
NavACL is a method of automatic curriculum learning tailored to the navigation task.
Deep reinforcement learning agents trained using NavACL significantly outperform state-of-the-art agents trained with uniform sampling.
Our agents can navigate through unknown cluttered indoor environments to semantically-specified targets using only RGB images.
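A generic automatic-curriculum sampler in the spirit of this entry is sketched below: prefer candidate navigation tasks whose estimated success probability is intermediate, so training stays near the agent's frontier of competence. The success predictor and the thresholds are illustrative assumptions, not NavACL's exact criterion.

```python
# Curriculum sampling sketch: avoid tasks that are trivially easy or hopeless.
import random


def sample_task(candidates, predict_success, easy=0.9, hard=0.1):
    """Pick a task whose predicted success probability lies between the thresholds."""
    frontier = [t for t in candidates if hard < predict_success(t) < easy]
    return random.choice(frontier) if frontier else random.choice(candidates)
```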
arXiv Detail & Related papers (2020-09-11T13:28:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.