Autonomous Unmanned Aerial Vehicle Navigation using Reinforcement
Learning: A Systematic Review
- URL: http://arxiv.org/abs/2208.12328v1
- Date: Thu, 25 Aug 2022 20:04:11 GMT
- Title: Autonomous Unmanned Aerial Vehicle Navigation using Reinforcement
Learning: A Systematic Review
- Authors: Fadi AlMahamid and Katarina Grolinger
- Abstract summary: Unmanned Aerial Vehicles (UAVs) are used in different applications such as package delivery, traffic monitoring, search and rescue operations, and military combat engagements.
In all of these applications, the UAV navigates the environment autonomously, without human interaction, to perform specific tasks and avoid obstacles.
This study first identifies the main UAV navigation tasks and discusses navigation frameworks and simulation software.
Next, RL algorithms are classified and discussed based on the environment, algorithm characteristics, abilities, and applications in different UAV navigation problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an increasing demand for using Unmanned Aerial Vehicles (UAVs),
known as drones, in different applications such as package delivery, traffic
monitoring, search and rescue operations, and military combat engagements. In
all of these applications, the UAV navigates the environment autonomously,
without human interaction, to perform specific tasks and avoid obstacles.
Autonomous UAV navigation is commonly accomplished using
Reinforcement Learning (RL), where agents act as experts in a domain to
navigate the environment while avoiding obstacles. Understanding the navigation
environment and algorithmic limitations plays an essential role in choosing the
appropriate RL algorithm to solve the navigation problem effectively.
Consequently, this study first identifies the main UAV navigation tasks and
discusses navigation frameworks and simulation software. Next, RL algorithms
are classified and discussed based on the environment, algorithm
characteristics, abilities, and applications in different UAV navigation
problems, which will help practitioners and researchers select the
appropriate RL algorithms for their UAV navigation use cases. Moreover,
the identified gaps and opportunities will guide future UAV navigation research.
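As a concrete illustration of the agent-environment loop described in the abstract, the sketch below trains a minimal tabular Q-learning agent on a toy occupancy grid with obstacles. It is not taken from the survey: the grid layout, reward shaping, and hyperparameters are assumptions that stand in for a real UAV simulator, and the deep RL algorithms the paper classifies replace the Q-table with a neural network.

```python
import random

# Toy stand-in for a UAV navigation task: a 5x5 occupancy grid with
# blocked cells (obstacles), a start cell, and a goal cell (all assumed).
GRID = 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state, -1.0, False          # collision: stay put, penalty
    if nxt == GOAL:
        return nxt, 10.0, True             # goal reached
    return nxt, -0.1, False                # small step cost favours short paths

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 2000  # assumed hyperparameters
Q = {}                                     # (state, action_index) -> value

def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(EPISODES):
    s = START
    for _ in range(200):                   # cap episode length
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q(s, i))
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(q(s2, i) for i in range(len(ACTIONS)))
        Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * best_next - q(s, a))
        s = s2
        if done:
            break

# Greedy rollout of the learned policy.
s, path = START, [START]
while s != GOAL and len(path) < 50:
    a = max(range(len(ACTIONS)), key=lambda i: q(s, i))
    s, _, _ = step(s, ACTIONS[a])
    path.append(s)
print("greedy path:", path)
```

In a realistic setting the discrete grid is replaced by continuous sensor observations and control commands from a simulator such as AirSim or Gazebo, which is where the deep RL algorithms classified in the survey come in.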
Related papers
- DeepAir: A Multi-Agent Deep Reinforcement Learning Based Scheme for an Unknown User Location Problem [6.185645393091031]
The deployment of unmanned aerial vehicles (UAVs) in many different settings has provided various solutions and strategies for networking paradigms.
One of the remaining problems is determining unknown user locations in an infrastructure-less environment.
In this study, we propose a novel deep reinforcement learning (DRL) based scheme, DeepAir.
arXiv Detail & Related papers (2024-08-11T07:28:35Z)
- Research on an Autonomous UAV Search and Rescue System Based on the Improved [1.3399503792039942]
This paper proposes an autonomous search and rescue UAV system based on an EGO-Planner algorithm.
It applies inverse motor backstepping to enhance the UAV's overall flight efficiency and to miniaturize the whole machine.
At the same time, the system introduces the EGO-Planner planning tool, which is optimized with a bidirectional A* algorithm together with an object detection algorithm (a simplified A* sketch appears after this list).
arXiv Detail & Related papers (2024-06-01T17:25:29Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Advanced Algorithms of Collision Free Navigation and Flocking for Autonomous UAVs [0.0]
This report contributes towards the state-of-the-art in UAV control for safe autonomous navigation and motion coordination of multi-UAV systems.
The first part of this report deals with single-UAV systems. The complex problem of three-dimensional (3D) collision-free navigation in unknown/dynamic environments is addressed.
The second part of this report addresses safe navigation for multi-UAV systems. Distributed motion coordination methods of multi-UAV systems for flocking and 3D area coverage are developed.
arXiv Detail & Related papers (2021-10-30T03:51:40Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system improves the time cost, the proportion of search area surveyed, and the success rate of search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- Adversarial Environment Generation for Learning to Navigate the Web [107.99759923626242]
One of the bottlenecks of training web navigation agents is providing a learnable curriculum of training environments.
We propose using Adversarial Environment Generation (AEG) to generate challenging web environments in which to train reinforcement learning (RL) agents.
We show that the navigator agent trained with our proposed Flexible b-PAIRED technique significantly outperforms competitive automatic curriculum generation baselines.
arXiv Detail & Related papers (2021-03-02T19:19:30Z)
- Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent carries out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
- APPLD: Adaptive Planner Parameter Learning from Demonstration [48.63930323392909]
We introduce APPLD, Adaptive Planner Parameter Learning from Demonstration, which allows existing navigation systems to be successfully applied to new complex environments.
APPLD is verified on two robots running different navigation systems in different environments.
Experimental results show that APPLD can outperform navigation systems with the default and expert-tuned parameters, and even the human demonstrator themselves.
arXiv Detail & Related papers (2020-03-31T21:15:16Z)
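The EGO-Planner entry above mentions a bidirectional A* optimization. As a point of reference, here is a minimal unidirectional A* path search on the same kind of toy occupancy grid; it is not the paper's implementation, the grid and unit step costs are assumptions, and the bidirectional variant additionally searches backward from the goal and stops once the two frontiers meet.

```python
import heapq

# Assumed toy occupancy grid (same convention as the Q-learning sketch above).
GRID = 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}
START, GOAL = (0, 0), (4, 4)

def neighbors(cell):
    """4-connected neighbours that stay on the grid and avoid obstacles."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nxt = (cell[0] + dr, cell[1] + dc)
        if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in OBSTACLES:
            yield nxt

def heuristic(a, b):
    """Manhattan distance: admissible on a 4-connected grid with unit costs."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal):
    """Return a lowest-cost path from start to goal, or None if unreachable."""
    frontier = [(heuristic(start, goal), 0, start)]   # (f = g + h, g, cell)
    came_from, best_g = {start: None}, {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:                               # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        if g > best_g[cell]:
            continue                                   # stale queue entry
        for nxt in neighbors(cell):
            g2 = g + 1
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                came_from[nxt] = cell
                heapq.heappush(frontier, (g2 + heuristic(nxt, goal), g2, nxt))
    return None

print("A* path:", a_star(START, GOAL))
```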