Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle
Avoidance into Conventional Autonomous Navigation Systems
- URL: http://arxiv.org/abs/2104.03616v1
- Date: Thu, 8 Apr 2021 08:56:53 GMT
- Title: Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle
Avoidance into Conventional Autonomous Navigation Systems
- Authors: Linh Kästner, Teham Buiyan, Xinlin Zhao, Lei Jiao, Zhengcheng Shen
and Jens Lambrecht
- Abstract summary: Deep reinforcement learning emerged as an alternative planning method to replace overly conservative approaches.
Deep reinforcement learning approaches are not suitable for long-range navigation due to their proneness to local minima.
In this paper, we propose a navigation system incorporating deep-reinforcement-learning-based local planners into conventional navigation stacks for long-range navigation.
- Score: 10.349425078806751
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, mobile robots have become important tools in various industries,
especially in logistics. Deep reinforcement learning emerged as an alternative
planning method to replace overly conservative approaches and promises more
efficient and flexible navigation. However, deep reinforcement learning
approaches are not suitable for long-range navigation due to their proneness to
local minima and lack of long-term memory, which hinder their widespread
integration into industrial applications of mobile robotics. In this paper, we
propose a navigation system incorporating deep-reinforcement-learning-based
local planners into conventional navigation stacks for long-range navigation.
To this end, we present a framework for training and testing deep reinforcement
learning algorithms alongside classic approaches. We evaluated our
deep-reinforcement-learning-enhanced navigation system against various
conventional planners and found that our system outperforms them in terms of
safety, efficiency and robustness.
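
The abstract describes plugging a learned, DRL-based local planner into a conventional navigation stack, so that a classic global planner handles long-range planning while the learned policy handles short-range obstacle avoidance. As a minimal sketch of that idea only (not the authors' implementation), the Python below shows a hypothetical hybrid stack; the class HybridNavigationStack, the global_planner.plan and drl_policy.act interfaces, and the observation layout are all assumptions made for illustration.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]


class HybridNavigationStack:
    """Hypothetical hybrid planner (illustration only, not the paper's code):
    a conventional global planner supplies intermediate waypoints, and a
    learned DRL policy produces velocity commands between them."""

    def __init__(self, global_planner, drl_policy, waypoint_tolerance: float = 0.3):
        self.global_planner = global_planner   # assumed: .plan(start, goal) -> List[Point]
        self.drl_policy = drl_policy           # assumed: .act(observation) -> (v, w)
        self.waypoint_tolerance = waypoint_tolerance

    def plan(self, start: Point, goal: Point) -> List[Point]:
        # Long-range planning stays with the classic global planner, which is
        # what sidesteps the local-minima problem of purely learned navigation.
        return self.global_planner.plan(start, goal)

    def step(self, robot_pose: Point, waypoints: List[Point], lidar_scan) -> Tuple[float, float]:
        # Drop waypoints that have already been reached.
        while waypoints and self._dist(robot_pose, waypoints[0]) < self.waypoint_tolerance:
            waypoints.pop(0)
        if not waypoints:
            return 0.0, 0.0  # final goal reached: stop

        # The DRL policy only sees the local observation and the next waypoint,
        # so it acts as a short-range, obstacle-avoiding local planner.
        observation = self._build_observation(robot_pose, waypoints[0], lidar_scan)
        return self.drl_policy.act(observation)

    @staticmethod
    def _dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])

    @staticmethod
    def _build_observation(pose: Point, waypoint: Point, lidar_scan) -> Dict:
        # Relative sub-goal position plus raw range readings is a common DRL input.
        return {
            "goal_dx": waypoint[0] - pose[0],
            "goal_dy": waypoint[1] - pose[1],
            "scan": list(lidar_scan),
        }
```

Keeping the global planner responsible for the route, and limiting the learned policy to the stretch between consecutive waypoints, is what addresses the local-minima and long-term-memory limitations the abstract attributes to end-to-end DRL navigation.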
Related papers
- Long-distance Geomagnetic Navigation in GNSS-denied Environments with Deep Reinforcement Learning [62.186340267690824]
Existing studies on geomagnetic navigation rely on pre-stored maps or extensive searches, leading to limited applicability or reduced navigation efficiency in unexplored areas.
This paper develops a deep reinforcement learning (DRL)-based mechanism, especially for long-distance geomagnetic navigation.
The designed mechanism trains an agent to learn and gain the magnetoreception capacity for geomagnetic navigation, rather than using any pre-stored map or extensive and expensive searching approaches.
arXiv Detail & Related papers (2024-10-21T09:57:42Z)
- Hyp2Nav: Hyperbolic Planning and Curiosity for Crowd Navigation [58.574464340559466]
We advocate for hyperbolic learning to enable crowd navigation and we introduce Hyp2Nav.
Hyp2Nav leverages the intrinsic properties of hyperbolic geometry to better encode the hierarchical nature of decision-making processes in navigation tasks.
We propose a hyperbolic policy model and a hyperbolic curiosity module that result in effective social navigation, achieving the best success rates and returns across multiple simulation settings.
arXiv Detail & Related papers (2024-07-18T14:40:33Z)
- Learning Robust Autonomous Navigation and Locomotion for Wheeled-Legged Robots [50.02055068660255]
Navigating urban environments poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation.
This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city.
Using model-free reinforcement learning (RL) techniques and privileged learning, we develop a versatile locomotion controller.
Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain.
arXiv Detail & Related papers (2024-05-03T00:29:20Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Robot path planning using deep reinforcement learning [0.0]
Reinforcement learning methods offer an alternative approach for map-free navigation tasks.
Deep reinforcement learning agents are implemented for both the obstacle avoidance and the goal-oriented navigation task.
An analysis of the changes in the behaviour and performance of the agents caused by modifications in the reward function is conducted.
arXiv Detail & Related papers (2023-02-17T20:08:59Z)
- Holistic Deep-Reinforcement-Learning-based Training of Autonomous Navigation Systems [4.409836695738518]
Deep Reinforcement Learning emerged as a promising approach for autonomous navigation of ground vehicles.
In this paper, we propose a holistic Deep Reinforcement Learning training approach involving all entities of the navigation stack.
arXiv Detail & Related papers (2023-02-06T16:52:15Z)
- Offline Reinforcement Learning for Visual Navigation [66.88830049694457]
ReViND is the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world.
We show that ReViND can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.
arXiv Detail & Related papers (2022-12-16T02:23:50Z)
- Benchmarking Reinforcement Learning Techniques for Autonomous Navigation [41.1337061798188]
Deep reinforcement learning (RL) has brought many successes to autonomous robot navigation.
However, important limitations still prevent the real-world use of RL-based navigation systems.
arXiv Detail & Related papers (2022-10-10T16:53:42Z)
- Enhancing Navigational Safety in Crowded Environments using Semantic-Deep-Reinforcement-Learning-based Navigation [5.706538676509249]
We propose a semantic deep-reinforcement-learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information.
We demonstrate that the agent could learn to navigate more safely by keeping an individual safety distance dependent on the semantic information.
arXiv Detail & Related papers (2021-09-23T10:50:47Z)
- Connecting Deep-Reinforcement-Learning-based Obstacle Avoidance with Conventional Global Planners using Waypoint Generators [1.4680035572775534]
Deep Reinforcement Learning has emerged as an efficient dynamic obstacle avoidance method in highly dynamic environments.
The integration of Deep Reinforcement Learning into existing navigation systems is still an open frontier due to the myopic nature of Deep-Reinforcement-Learning-based navigation.
arXiv Detail & Related papers (2021-04-08T10:23:23Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming in physical noise patterns on the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)