Connecting Deep-Reinforcement-Learning-based Obstacle Avoidance with
Conventional Global Planners using Waypoint Generators
- URL: http://arxiv.org/abs/2104.03663v1
- Date: Thu, 8 Apr 2021 10:23:23 GMT
- Title: Connecting Deep-Reinforcement-Learning-based Obstacle Avoidance with
Conventional Global Planners using Waypoint Generators
- Authors: Linh Kästner, Teham Buiyan, Xinlin Zhao, Zhengcheng Shen, Cornelius
Marx and Jens Lambrecht
- Abstract summary: Deep Reinforcement Learning has emerged as an efficient dynamic obstacle avoidance method in highly dynamic environments.
The integration of Deep Reinforcement Learning into existing navigation systems is still an open frontier due to the myopic nature of Deep-Reinforcement-Learning-based navigation.
- Score: 1.4680035572775534
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Reinforcement Learning has emerged as an efficient dynamic obstacle
avoidance method in highly dynamic environments. It has the potential to
replace overly conservative or inefficient navigation approaches. However, the
integration of Deep Reinforcement Learning into existing navigation systems is
still an open frontier due to the myopic nature of
Deep-Reinforcement-Learning-based navigation, which hinders its widespread
adoption. In this paper, we propose the concept of an intermediate planner that
interconnects novel Deep-Reinforcement-Learning-based obstacle avoidance with
conventional global planning methods via waypoint generation. To this end, we
integrate different waypoint generators into existing navigation systems and
compare the joint system against traditional baselines. We found increased
performance in terms of safety, efficiency, and path smoothness, especially in
highly dynamic environments.
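The intermediate-planner idea above can be illustrated with a minimal sketch. The function below (a hypothetical `generate_waypoint`, not taken from the paper's codebase) picks a subgoal on the global planner's path a fixed lookahead distance ahead of the robot; the DRL local planner would then be tasked with reaching this nearby subgoal rather than the distant final goal, mitigating the myopia discussed in the abstract.

```python
import math

def generate_waypoint(global_path, robot_pos, lookahead=2.0):
    """Pick the first point on the global path that lies at least
    `lookahead` metres beyond the robot's nearest path point.
    This subgoal is handed to the DRL local planner as its goal."""
    # Index of the path point closest to the robot's current position.
    nearest = min(range(len(global_path)),
                  key=lambda i: math.dist(robot_pos, global_path[i]))
    # Scan forward from there for the first point beyond the lookahead.
    for wp in global_path[nearest:]:
        if math.dist(robot_pos, wp) >= lookahead:
            return wp
    # Near the end of the path: fall back to the final goal itself.
    return global_path[-1]
```

Called once per control cycle, this keeps the local planner's target within its effective sensing horizon; the lookahead distance is a tuning parameter, and the paper's actual waypoint generators may use richer criteria than pure distance.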
Related papers
- Long-distance Geomagnetic Navigation in GNSS-denied Environments with Deep Reinforcement Learning [62.186340267690824]
Existing studies on geomagnetic navigation rely on pre-stored maps or extensive searches, leading to limited applicability or reduced navigation efficiency in unexplored areas.
This paper develops a deep reinforcement learning (DRL)-based mechanism, especially for long-distance geomagnetic navigation.
The designed mechanism trains an agent to learn and gain the magnetoreception capacity for geomagnetic navigation, rather than using any pre-stored map or extensive and expensive searching approaches.
arXiv Detail & Related papers (2024-10-21T09:57:42Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation [5.484041860401147]
TOP-Nav is a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance and closed-loop Proprioception.
We show that TOP-Nav achieves open-world navigation in which the robot can handle terrains and disturbances beyond the distribution of prior knowledge.
arXiv Detail & Related papers (2024-04-23T17:42:45Z)
- How To Not Train Your Dragon: Training-free Embodied Object Goal Navigation with Semantic Frontiers [94.46825166907831]
We present a training-free solution to tackle the object goal navigation problem in Embodied AI.
Our method builds a structured scene representation based on the classic visual simultaneous localization and mapping (V-SLAM) framework.
Our method propagates semantics on the scene graphs based on language priors and scene statistics to introduce semantic knowledge to the geometric frontiers.
arXiv Detail & Related papers (2023-05-26T13:38:33Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over the prior state-of-the-art on the R2R-CE and RxR-CE datasets, respectively.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Holistic Deep-Reinforcement-Learning-based Training of Autonomous Navigation Systems [4.409836695738518]
Deep Reinforcement Learning emerged as a promising approach for autonomous navigation of ground vehicles.
In this paper, we propose a holistic Deep Reinforcement Learning training approach involving all entities of the navigation stack.
arXiv Detail & Related papers (2023-02-06T16:52:15Z)
- Learning Forward Dynamics Model and Informed Trajectory Sampler for Safe Quadruped Navigation [1.2783783498844021]
A typical SOTA system is composed of four main modules -- mapper, global planner, local planner, and command-tracking controller.
We build a robust and safe local planner which is designed to generate a velocity plan to track a coarsely planned path from the global planner.
Using our framework, a quadruped robot can autonomously navigate in various complex environments without a collision and generate a smoother command plan compared to the baseline method.
arXiv Detail & Related papers (2022-04-19T04:01:44Z)
- Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration [83.96729205383501]
We introduce prompt-based learning to achieve fast adaptation for language embeddings.
Our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.
arXiv Detail & Related papers (2022-03-08T11:01:24Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) algorithms.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- Enhancing Navigational Safety in Crowded Environments using Semantic-Deep-Reinforcement-Learning-based Navigation [5.706538676509249]
We propose a semantic Deep-Reinforcement-Learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information.
We demonstrate that the agent could learn to navigate more safely by keeping an individual safety distance dependent on the semantic information.
arXiv Detail & Related papers (2021-09-23T10:50:47Z)
- Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle Avoidance into Conventional Autonomous Navigation Systems [10.349425078806751]
Deep reinforcement learning emerged as an alternative planning method to replace overly conservative approaches.
Deep reinforcement learning approaches are not suitable for long-range navigation due to their proneness to local minima.
In this paper, we propose a navigation system incorporating deep-reinforcement-learning-based local planners into conventional navigation stacks for long-range navigation.
arXiv Detail & Related papers (2021-04-08T08:56:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.