A deep reinforcement learning approach to assess the low-altitude
airspace capacity for urban air mobility
- URL: http://arxiv.org/abs/2301.09758v1
- Date: Mon, 23 Jan 2023 23:38:05 GMT
- Title: A deep reinforcement learning approach to assess the low-altitude
airspace capacity for urban air mobility
- Authors: Asal Mehditabrizi, Mahdi Samadzad, Sina Sabzekar
- Abstract summary: Urban air mobility aims to provide a fast and secure way of travel by utilizing the low-altitude airspace.
Authorities are still drafting new flight rules applicable to urban air mobility.
An autonomous UAV path planning framework is proposed using a deep reinforcement learning approach and a deep deterministic policy gradient algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Urban air mobility is a new mode of transportation aiming to provide a fast
and secure way of travel by utilizing the low-altitude airspace. This goal
cannot be achieved without the implementation of new flight regulations that
ensure safe and efficient allocation of flight paths to a large number of
vertical takeoff/landing aerial vehicles. Such rules should also allow
estimating the effective capacity of the low-altitude airspace for planning
purposes. Path planning is a vital subject in urban air mobility, as it could
enable a large number of UAVs to fly simultaneously in the airspace without
facing the risk of collision. Since urban air mobility is a novel concept,
authorities are still working on drafting new flight rules applicable
to urban air mobility. In this study, an autonomous UAV path planning framework
is proposed using a deep reinforcement learning approach and a deep
deterministic policy gradient algorithm. The objective is to employ a
self-trained UAV to reach its destination in the shortest possible time in any
arbitrary environment by adjusting its acceleration. It should avoid collisions
with any dynamic or static obstacles and avoid entering prior permission zones
existing on its path. The reward function is the determinant factor in the
training process. Thus, two different reward function compositions are compared,
and the chosen composition is used to train the UAV, with the RL algorithm
implemented in Python. Finally, numerical simulations investigate the success
rate of UAVs in different scenarios, providing an estimate of the effective
airspace capacity.
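To make the setup concrete, the sketch below illustrates a 2D point-mass UAV whose action is an acceleration command, together with one plausible shape for the composite reward the abstract describes (progress toward the destination, a per-step time cost, and terminal penalties for collisions and prior-permission zones). This is a minimal illustration under stated assumptions, not the authors' implementation; every weight, radius, and function name here is hypothetical.

```python
import numpy as np

# Illustrative sketch only -- not the paper's code. All constants below
# are hypothetical assumptions chosen for readability.
DT = 0.1      # integration time step [s]
A_MAX = 2.0   # per-axis acceleration bound [m/s^2]

def kinematic_step(pos, vel, accel):
    """Advance a 2D point-mass UAV one step under a clipped acceleration."""
    accel = np.clip(accel, -A_MAX, A_MAX)
    vel = vel + accel * DT
    pos = pos + vel * DT
    return pos, vel

def composite_reward(prev_pos, pos, goal, obstacles, zones,
                     w_progress=1.0, w_time=0.05, r_goal=100.0,
                     p_collision=100.0, p_zone=50.0,
                     goal_radius=1.0, collision_radius=0.5):
    """Return (reward, done) for one transition.

    Terminal events: reaching the goal, hitting an obstacle, or entering
    a prior-permission zone. Otherwise a dense shaping term rewards the
    distance closed toward the goal, minus a per-step time cost.
    """
    if np.linalg.norm(goal - pos) < goal_radius:
        return r_goal, True                       # destination reached
    for obs in obstacles:                         # static or dynamic obstacles
        if np.linalg.norm(obs - pos) < collision_radius:
            return -p_collision, True             # collision
    for center, radius in zones:                  # prior-permission zones
        if np.linalg.norm(center - pos) < radius:
            return -p_zone, True                  # unauthorized entry
    progress = np.linalg.norm(goal - prev_pos) - np.linalg.norm(goal - pos)
    return w_progress * progress - w_time, False

# Example transition: a DDPG actor would output the acceleration command.
pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
new_pos, new_vel = kinematic_step(pos, vel, accel=np.array([0.5, 0.2]))
r, done = composite_reward(pos, new_pos, goal=np.array([50.0, 50.0]),
                           obstacles=[np.array([20.0, 20.0])],
                           zones=[(np.array([30.0, 10.0]), 5.0)])
```

In the full framework, a DDPG actor-critic pair would map the observed state to this continuous acceleration and learn from the resulting transitions; comparing compositions such as sparse terminal rewards versus the dense shaping above is the kind of reward-design study the abstract refers to.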
Related papers
- Self-organized free-flight arrival for urban air mobility [0.9217021281095907]
Urban air mobility is an innovative mode of transportation in which electric vertical takeoff and landing (eVTOL) vehicles operate between nodes called vertiports.
We outline a self-organized vertiport arrival system based on deep reinforcement learning.
Each aircraft is considered an individual agent and follows a shared policy, resulting in decentralized actions that are based on local information.
arXiv Detail & Related papers (2024-04-04T13:43:17Z)
- Integrated Conflict Management for UAM with Strategic Demand Capacity Balancing and Learning-based Tactical Deconfliction [3.074861321741328]
We propose a novel framework that combines demand capacity balancing (DCB) for strategic conflict management and reinforcement learning for tactical separation.
Our results indicate that this DCB preconditioning can allow target levels of safety to be met that are otherwise impossible.
arXiv Detail & Related papers (2023-05-17T20:23:18Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Obstacle Avoidance for UAS in Continuous Action Space Using Deep Reinforcement Learning [9.891207216312937]
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility.
We propose a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide autonomous UAS to their destinations.
Results show that the proposed model can provide accurate and robust guidance and resolve conflict with a success rate of over 99%.
arXiv Detail & Related papers (2021-11-13T04:44:53Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system achieves improvements in time-cost, the proportion of search area surveyed, and success rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- An Autonomous Free Airspace En-route Controller using Deep Reinforcement Learning Techniques [24.59017394648942]
An air traffic control model is presented that guides an arbitrary number of aircraft across a three-dimensional, unstructured airspace.
Results show that the air traffic control model performs well on realistic traffic densities.
It is capable of managing the airspace by avoiding 100% of potential collisions and preventing 89.8% of potential conflicts.
arXiv Detail & Related papers (2020-07-03T10:37:25Z)
- Congestion-aware Evacuation Routing using Augmented Reality Devices [96.68280427555808]
We present a congestion-aware routing solution for indoor evacuation, which produces real-time individual-customized evacuation routes among multiple destinations.
A population density map, obtained on-the-fly by aggregating locations of evacuees from user-end Augmented Reality (AR) devices, is used to model the congestion distribution inside a building.
arXiv Detail & Related papers (2020-04-25T22:54:35Z)
- Autonomous UAV Navigation: A DDPG-based Deep Reinforcement Learning Approach [1.552282932199974]
We propose an autonomous UAV path planning framework using a deep reinforcement learning approach.
The objective is to employ a self-trained UAV as a flying mobile unit to reach spatially distributed moving or static targets.
arXiv Detail & Related papers (2020-03-21T19:33:00Z)
- Simultaneous Navigation and Radio Mapping for Cellular-Connected UAV with Deep Reinforcement Learning [46.55077580093577]
How to achieve ubiquitous 3D communication coverage for UAVs in the sky is a new challenge.
We propose a new coverage-aware navigation approach, which exploits the UAV's controllable mobility to design its navigation/trajectory.
We propose a new framework called simultaneous navigation and radio mapping (SNARM), where the UAV's signal measurement is used to train the deep Q network.
arXiv Detail & Related papers (2020-03-17T08:16:14Z)