Holistic Deep-Reinforcement-Learning-based Training of Autonomous
Navigation Systems
- URL: http://arxiv.org/abs/2302.02921v1
- Date: Mon, 6 Feb 2023 16:52:15 GMT
- Title: Holistic Deep-Reinforcement-Learning-based Training of Autonomous
Navigation Systems
- Authors: Linh Kästner, Marvin Meusel, Teham Bhuiyan, and Jens Lambrecht
- Abstract summary: Deep Reinforcement Learning has emerged as a promising approach for autonomous navigation of ground vehicles.
In this paper, we propose a holistic Deep Reinforcement Learning training approach involving all entities of the navigation stack.
- Score: 4.409836695738518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Deep Reinforcement Learning has emerged as a promising
approach for autonomous navigation of ground vehicles and has been applied to
various areas of navigation such as cruise control, lane changing, and obstacle
avoidance. However, most research works either provide an end-to-end solution
that trains the whole system with Deep Reinforcement Learning or focus on one
specific aspect such as local motion planning. This comes with a number of
problems, such as catastrophic forgetting, inefficient navigation behavior, and
suboptimal synchronization between the different entities of the navigation
stack. In this paper, we propose a holistic Deep Reinforcement Learning training
approach in which the training procedure involves all entities of the navigation
stack. This should improve the synchronization between, and mutual understanding
of, all entities of the navigation stack and, as a result, improve navigational
performance. We trained several agents with different observation spaces to
study the impact of different inputs on the navigation behavior of the agent.
In extensive evaluations against multiple learning-based and classic model-based
navigation approaches, our proposed agent outperforms the baselines in terms of
efficiency and safety, attaining shorter path lengths, less roundabout paths,
and fewer collisions.
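The sketch below is a minimal, hypothetical illustration of what such a holistic setup could look like: a single observation space that exposes several entities of a typical navigation stack (laser scan, intermediate-planner waypoints, global goal, robot velocity) to one Deep Reinforcement Learning agent, together with the kind of observation-space variants mentioned in the abstract. All names, value ranges, and the gymnasium-based interface are assumptions made for illustration, not the authors' published code.

```python
# Minimal sketch (assumption, not the authors' code): one observation space that
# exposes several entities of the navigation stack to a single DRL agent.
import numpy as np
from gymnasium import spaces


def make_observation_space(num_beams: int = 360, num_waypoints: int = 4) -> spaces.Dict:
    """Hypothetical composed observation space for a holistic navigation agent."""
    return spaces.Dict({
        # Raw 2D laser scan in meters, clipped to the sensor range.
        "laser_scan": spaces.Box(low=0.0, high=30.0, shape=(num_beams,), dtype=np.float32),
        # Next waypoints proposed by an intermediate planner, in the robot frame (x, y).
        "waypoints": spaces.Box(low=-50.0, high=50.0, shape=(num_waypoints, 2), dtype=np.float32),
        # Global goal relative to the robot: distance (m) and heading (rad).
        "goal": spaces.Box(low=np.array([0.0, -np.pi], dtype=np.float32),
                           high=np.array([100.0, np.pi], dtype=np.float32),
                           dtype=np.float32),
        # Current linear and angular velocity of the robot.
        "velocity": spaces.Box(low=-2.0, high=2.0, shape=(2,), dtype=np.float32),
    })


# Hypothetical observation-space variants: each would correspond to a separately
# trained agent, as in the paper's study of different inputs.
ABLATIONS = {
    "scan_only": ["laser_scan", "goal"],
    "with_waypoints": ["laser_scan", "waypoints", "goal"],
    "full_stack": ["laser_scan", "waypoints", "goal", "velocity"],
}

if __name__ == "__main__":
    space = make_observation_space()
    sample = space.sample()
    for name, keys in ABLATIONS.items():
        obs = {k: sample[k] for k in keys}
        print(name, {k: v.shape for k, v in obs.items()})
```

In a training pipeline along these lines, each variant's policy network would consume only the selected keys (for example after flattening or encoding each component), which is one way the impact of different inputs on navigation behavior could be compared.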
Related papers
- Two-Stage Depth Enhanced Learning with Obstacle Map For Object Navigation [11.667940255053582]
This paper uses the RGB and depth information of the training scene to pretrain the feature extractor, which improves navigation efficiency.
We evaluated our method on AI2-Thor and RoboTHOR and demonstrated that it significantly outperforms state-of-the-art (SOTA) methods on success rate and navigation efficiency.
arXiv Detail & Related papers (2024-06-20T08:35:10Z) - Learning Robust Autonomous Navigation and Locomotion for Wheeled-Legged Robots [50.02055068660255]
Navigating urban environments poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation.
This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city.
Using model-free reinforcement learning (RL) techniques and privileged learning, we develop a versatile locomotion controller.
Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain.
arXiv Detail & Related papers (2024-05-03T00:29:20Z) - TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation [5.484041860401147]
TOP-Nav is a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance and closed-loop Proprioception.
We show that TOP-Nav achieves open-world navigation in which the robot can handle terrains or disturbances beyond the distribution of its prior knowledge.
arXiv Detail & Related papers (2024-04-23T17:42:45Z) - ETPNav: Evolving Topological Planning for Vision-Language Navigation in
Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability to perform obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z) - Robot path planning using deep reinforcement learning [0.0]
Reinforcement learning methods offer an alternative approach for map-free navigation tasks.
Deep reinforcement learning agents are implemented for both the obstacle avoidance task and the goal-oriented navigation task.
An analysis is conducted of how modifications to the reward function change the behaviour and performance of the agents.
arXiv Detail & Related papers (2023-02-17T20:08:59Z) - Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates software for reactive navigation and path planning.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning an obstacle-free path.
Results show how our augmented reality navigation system improves navigation performance by reducing the time and distance needed to reach the goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z) - Adversarial Reinforced Instruction Attacker for Robust Vision-Language
Navigation [145.84123197129298]
Language instruction plays an essential role in natural language grounded navigation tasks.
We aim to train a more robust navigator that is capable of dynamically extracting crucial factors from long instructions.
Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to mislead the navigator to move to the wrong target.
arXiv Detail & Related papers (2021-07-23T14:11:31Z) - Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z) - Connecting Deep-Reinforcement-Learning-based Obstacle Avoidance with
Conventional Global Planners using Waypoint Generators [1.4680035572775534]
Deep Reinforcement Learning has emerged as an efficient dynamic obstacle avoidance method in highly dynamic environments.
The integration of Deep Reinforcement Learning into existing navigation systems is still an open frontier due to the myopic nature of Deep-Reinforcement-Learning-based navigation.
arXiv Detail & Related papers (2021-04-08T10:23:23Z) - Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle
Avoidance into Conventional Autonomous Navigation Systems [10.349425078806751]
Deep reinforcement learning has emerged as an alternative planning method to replace overly conservative approaches.
However, deep reinforcement learning approaches are not suitable for long-range navigation due to their proneness to local minima.
In this paper, we propose a navigation system incorporating deep-reinforcement-learning-based local planners into conventional navigation stacks for long-range navigation.
arXiv Detail & Related papers (2021-04-08T08:56:53Z) - Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent carries out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.