Benchmarking Reinforcement Learning Techniques for Autonomous Navigation
- URL: http://arxiv.org/abs/2210.04839v2
- Date: Tue, 27 Jun 2023 16:17:17 GMT
- Title: Benchmarking Reinforcement Learning Techniques for Autonomous Navigation
- Authors: Zifan Xu, Bo Liu, Xuesu Xiao, Anirudh Nair and Peter Stone
- Abstract summary: Deep reinforcement learning (RL) has brought many successes for autonomous robot navigation.
There still exist important limitations that prevent real-world use of RL-based navigation systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep reinforcement learning (RL) has brought many successes for autonomous
robot navigation. However, there still exist important limitations that
prevent real-world use of RL-based navigation systems. For example, most
learning approaches lack safety guarantees, and learned navigation systems may
not generalize well to unseen environments. Despite a variety of recent
learning techniques that tackle these challenges in general, the lack of an
open-source benchmark and reproducible learning methods specifically for
autonomous navigation makes it difficult for roboticists to choose what
learning methods to use for their mobile robots and for learning researchers to
identify current shortcomings of general learning methods for autonomous
navigation. In this paper, we identify four major desiderata of applying deep
RL approaches for autonomous navigation: (D1) reasoning under uncertainty, (D2)
safety, (D3) learning from limited trial-and-error data, and (D4)
generalization to diverse and novel environments. Then, we explore four major
classes of learning techniques with the purpose of achieving one or more of the
four desiderata: memory-based neural network architectures (D1), safe RL (D2),
model-based RL (D2, D3), and domain randomization (D4). By deploying these
learning techniques in a new open-source large-scale navigation benchmark and
real-world environments, we perform a comprehensive study aimed at establishing
to what extent these techniques can achieve these desiderata for RL-based
navigation systems.
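As a concrete illustration of one of the technique classes above, the sketch below shows domain randomization (D4) wrapped around a generic navigation training loop: environment parameters are resampled each episode so the policy trains on a distribution of conditions rather than one fixed world. The parameter names, ranges, and the make_env/agent interfaces are illustrative assumptions, not part of the paper's benchmark.

```python
import random

# Minimal domain-randomization sketch (D4). All parameter names/ranges and
# the make_env/agent interfaces are illustrative assumptions, not the
# benchmark's actual configuration.

PARAM_RANGES = {
    "obstacle_density": (0.05, 0.30),  # fraction of occupied space
    "lidar_noise_std": (0.00, 0.05),   # sensor noise, in meters
    "max_speed": (0.5, 2.0),           # robot speed limit, in m/s
}

def sample_env_params(rng: random.Random) -> dict:
    """Draw one random environment configuration."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def train(num_episodes, make_env, agent, seed=0):
    """Generic RL loop; make_env(**params) and agent are placeholders."""
    rng = random.Random(seed)
    for _ in range(num_episodes):
        env = make_env(**sample_env_params(rng))  # new randomized world each episode
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            next_obs, reward, done, info = env.step(action)
            agent.observe(obs, action, reward, next_obs, done)
            obs = next_obs
```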
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation that exploits recent advances in integrating game engines with deep reinforcement learning.
Specifically, we focus on PPO, one of the most widely used algorithms, and propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
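For reference, the heart of PPO that such a benchmark builds on is the clipped surrogate objective; the sketch below is the generic textbook form, not code from the paper.

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Negative clipped surrogate objective (to be minimized)."""
    ratio = np.exp(log_probs_new - log_probs_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Taking the minimum keeps updates conservative when the probability
    # ratio drifts outside the trust region [1 - eps, 1 + eps].
    return -np.mean(np.minimum(unclipped, clipped))
```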
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning [101.56342075720588]
Vision-and-Language Navigation (VLN), as a crucial research problem of Embodied AI, requires an embodied agent to navigate through complex 3D environments following natural language instructions.
Recent research has highlighted the promising capacity of large language models (LLMs) in VLN by improving navigational reasoning accuracy and interpretability.
This paper introduces a novel strategy called Navigational Chain-of-Thought (NavCoT), which performs parameter-efficient in-domain training to enable self-guided navigational decisions.
arXiv Detail & Related papers (2024-03-12T07:27:02Z)
- Enhanced Low-Dimensional Sensing Mapless Navigation of Terrestrial Mobile Robots Using Double Deep Reinforcement Learning Techniques [1.191504645891765]
We present two distinct approaches aimed at enhancing mapless navigation for a ground-based mobile robot.
The research methodology primarily involves a comparative analysis between a deep-RL strategy grounded in the foundational Deep Q-Network (DQN) algorithm and an alternative approach based on the Double Deep Q-Network (DDQN) algorithm.
The proposed methodology is evaluated in three different real environments, revealing that Double Deep structures significantly enhance the navigation capabilities of mobile robots compared to simple Q structures.
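The difference the comparison above hinges on can be stated in a few lines: Double DQN decouples action selection (online network) from action evaluation (target network) when forming the bootstrap target, which reduces the overestimation bias of plain DQN. The sketch below is a generic illustration, not the paper's implementation.

```python
import numpy as np

def dqn_target(rewards, q_target_next, dones, gamma=0.99):
    """Plain DQN: select and evaluate the next action with the target network."""
    return rewards + gamma * (1.0 - dones) * np.max(q_target_next, axis=1)

def ddqn_target(rewards, q_online_next, q_target_next, dones, gamma=0.99):
    """Double DQN: select with the online network, evaluate with the target network."""
    best_actions = np.argmax(q_online_next, axis=1)
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated
```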
arXiv Detail & Related papers (2023-10-20T20:47:07Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Human-Aware Robot Navigation via Reinforcement Learning with Hindsight Experience Replay and Curriculum Learning [28.045441768064215]
Reinforcement learning approaches have shown superior ability in solving sequential decision-making problems.
In this work, we consider the task of training an RL agent without employing demonstration data.
We propose to incorporate the hindsight experience replay (HER) and curriculum learning (CL) techniques with RL to efficiently learn the optimal navigation policy in dense crowds.
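A minimal sketch of the HER idea referenced above: after an episode, transitions are stored again with the goal replaced by a state that was actually reached, so sparse-reward episodes that failed to reach the original goal still produce learning signal. The episode layout and sparse reward function below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sparse_reward(achieved, goal, tol=0.05):
    """Illustrative sparse navigation reward: 0 at the goal, -1 elsewhere."""
    return 0.0 if np.linalg.norm(np.asarray(achieved) - np.asarray(goal)) < tol else -1.0

def her_relabel(episode, reward_fn=sparse_reward, tol=0.05):
    """episode: list of (obs, achieved, action, next_obs, next_achieved, goal).
    Returns extra transitions relabeled with the goal actually reached last."""
    final_goal = episode[-1][4]  # the last achieved state becomes the new goal
    relabeled = []
    for obs, achieved, action, next_obs, next_achieved, _goal in episode:
        reward = reward_fn(next_achieved, final_goal)
        done = bool(np.linalg.norm(np.asarray(next_achieved) - np.asarray(final_goal)) < tol)
        relabeled.append((obs, action, reward, next_obs, final_goal, done))
    return relabeled
```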
arXiv Detail & Related papers (2021-10-09T13:18:11Z)
- ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors [64.2809875343854]
We study how robots can autonomously learn skills that require a combination of navigation and grasping.
Our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation.
After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
arXiv Detail & Related papers (2021-07-28T17:59:41Z)
- Rule-Based Reinforcement Learning for Efficient Robot Navigation with Space Reduction [8.279526727422288]
In this paper, we focus on efficient navigation with the reinforcement learning (RL) technique.
We employ a reduction rule to shrink the trajectory, which in turn effectively reduces the redundant exploration space.
Experiments conducted on real robot navigation problems in hex-grid environments demonstrate that RuRL can achieve improved navigation performance.
arXiv Detail & Related papers (2021-04-15T07:40:27Z)
- Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle Avoidance into Conventional Autonomous Navigation Systems [10.349425078806751]
Deep reinforcement learning has emerged as an alternative planning method to replace overly conservative approaches.
However, deep reinforcement learning approaches are not suitable for long-range navigation due to their proneness to local minima.
In this paper, we propose a navigation system incorporating deep-reinforcement-learning-based local planners into conventional navigation stacks for long-range navigation.
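The proposed integration can be pictured as a thin dispatch layer: a conventional global planner supplies waypoints, and the learned local planner only has to reach the next waypoint, avoiding the long-range local minima noted above. The class and method names below are hypothetical placeholders used to illustrate the idea, not the paper's API.

```python
import math

class HybridNavigator:
    """Conventional global planner + learned local planner (illustrative only)."""

    def __init__(self, global_planner, rl_local_planner, waypoint_tol=0.5):
        self.global_planner = global_planner   # e.g., A*/Dijkstra on a static map
        self.local_planner = rl_local_planner  # trained DRL policy
        self.waypoint_tol = waypoint_tol       # meters

    def navigate_step(self, pose_xy, goal_xy, sensor_obs):
        # The global planner handles the long-range structure of the problem.
        waypoints = self.global_planner.plan(pose_xy, goal_xy)
        # The next unreached waypoint becomes the local subgoal, so the DRL
        # planner only reasons about the nearby scene.
        subgoal = next((w for w in waypoints
                        if math.dist(pose_xy, w) > self.waypoint_tol), goal_xy)
        return self.local_planner.act(sensor_obs, subgoal)
```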
arXiv Detail & Related papers (2021-04-08T08:56:53Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by injecting physical noise patterns at selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
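A minimal sketch of the strategically-timed idea: perturb the observation only at frames where the policy strongly prefers one action, so a small number of attacked frames is enough to degrade performance. The trigger condition and Gaussian noise model below are illustrative assumptions, not the paper's attack.

```python
import numpy as np

def should_attack(action_probs, threshold=0.8):
    """Attack only when the policy strongly prefers one action."""
    p = np.asarray(action_probs)
    return float(p.max() - p.min()) > threshold

def maybe_perturb(obs, action_probs, noise_std=0.1, rng=None):
    """Return the observation, noised only on strategically chosen frames."""
    rng = rng if rng is not None else np.random.default_rng()
    if should_attack(action_probs):
        return np.asarray(obs) + rng.normal(0.0, noise_std, size=np.shape(obs))
    return obs
```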
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.