A Reinforcement Learning Approach to Quiet and Safe UAM Traffic Management
- URL: http://arxiv.org/abs/2501.08941v1
- Date: Wed, 15 Jan 2025 16:44:35 GMT
- Title: A Reinforcement Learning Approach to Quiet and Safe UAM Traffic Management
- Authors: Surya Murthy, John-Paul Clarke, Ufuk Topcu, Zhenyu Gao
- Abstract summary: Urban air mobility (UAM) is a transformative system that operates various small aerial vehicles in urban environments.
Recent analyses of UAM's operational constraints highlight aircraft noise and system safety as key hurdles to UAM system implementation.
We propose a multi-agent reinforcement learning approach to manage UAM traffic, aiming at both vertical separation assurance and noise mitigation.
- Abstract: Urban air mobility (UAM) is a transformative system that operates various small aerial vehicles in urban environments to reshape urban transportation. However, integrating UAM into existing urban environments presents a variety of complex challenges. Recent analyses of UAM's operational constraints highlight aircraft noise and system safety as key hurdles to UAM system implementation. Future UAM air traffic management schemes must ensure that the system is both quiet and safe. We propose a multi-agent reinforcement learning approach to manage UAM traffic, aiming at both vertical separation assurance and noise mitigation. Through extensive training, the reinforcement learning agent learns to balance the two primary objectives by employing altitude adjustments in a multi-layer UAM network. The results reveal the tradeoffs among noise impact, traffic congestion, and separation. Overall, our findings demonstrate the potential of reinforcement learning in mitigating UAM's noise impact while maintaining safe separation using altitude adjustments.
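As a rough illustration of the tradeoff the abstract describes, here is a minimal Python sketch of how an altitude-adjustment reward could combine a noise term with a vertical-separation term. The layer count, noise model, and weights are illustrative assumptions, not the paper's implementation.
```python
import random

NUM_LAYERS = 4             # multi-layer UAM network (assumed size)
W_NOISE, W_SEP = 1.0, 5.0  # tradeoff weights (assumed values)

def noise_penalty(layer: int) -> float:
    """Lower layers sit closer to the ground and are assumed to be noisier."""
    return (NUM_LAYERS - layer) / NUM_LAYERS

def separation_penalty(layer: int, other_layers: list) -> float:
    """Penalize agents that share an altitude layer (loss of vertical separation)."""
    return float(sum(1 for other in other_layers if other == layer))

def step(layers: dict, actions: dict) -> dict:
    """Apply altitude adjustments (-1, 0, +1 layer) and return per-agent rewards."""
    for agent, delta in actions.items():
        layers[agent] = min(NUM_LAYERS - 1, max(0, layers[agent] + delta))
    rewards = {}
    for agent, layer in layers.items():
        others = [lvl for name, lvl in layers.items() if name != agent]
        rewards[agent] = -(W_NOISE * noise_penalty(layer)
                           + W_SEP * separation_penalty(layer, others))
    return rewards

if __name__ == "__main__":
    # Two vehicles start in the same layer; random altitude adjustments.
    layers = {"uam_1": 2, "uam_2": 2}
    for _ in range(3):
        actions = {agent: random.choice([-1, 0, 1]) for agent in layers}
        print(step(layers, actions))
```
A learned multi-agent policy would replace the random action choices here; the sketch only makes the two competing penalty terms explicit.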
Related papers
- Toward Safe Integration of UAM in Terminal Airspace: UAM Route Feasibility Assessment using Probabilistic Aircraft Trajectory Prediction [0.0]
This study proposes a framework to assess the feasibility of Urban Air Mobility (UAM) route integration using probabilistic aircraft trajectory prediction.
The methodology was applied to the airspace over the Seoul metropolitan area, encompassing interactions between UAM and conventional traffic at multiple altitudes and lanes.
arXiv Detail & Related papers (2025-01-28T00:28:16Z)
- Low-altitude Friendly-Jamming for Satellite-Maritime Communications via Generative AI-enabled Deep Reinforcement Learning [72.72954660774002]
Low Earth Orbit (LEO) satellites can be used to assist maritime wireless communications for data transmission across wide-ranging areas.
The extensive coverage of LEO satellites, combined with the openness of the channels, exposes the communication process to security risks.
This paper presents a low-altitude friendly-jamming LEO satellite-maritime communication system enabled by an unmanned aerial vehicle.
arXiv Detail & Related papers (2025-01-26T10:13:51Z)
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z)
- Routing and Scheduling Optimization for Urban Air Mobility Fleet Management using Quantum Annealing [1.2145532233226684]
Efficiently managing the anticipated high-density air traffic in cities is critical to ensure safe and effective operations.
We propose a routing and scheduling framework to address the needs of a large fleet of UAM vehicles operating in urban areas.
Our method is validated using a traffic management simulator tailored for the airspace in Singapore.
arXiv Detail & Related papers (2024-10-15T03:27:52Z)
- Towards a Standardized Reinforcement Learning Framework for AAM Contingency Management [0.0]
We develop a contingency management problem as a Markov Decision Process (MDP) and integrate it into the AAM-Gym simulation framework.
This enables rapid prototyping of reinforcement learning algorithms and evaluation of existing systems.
arXiv Detail & Related papers (2023-11-17T13:54:02Z)
- UAV Swarm-enabled Collaborative Secure Relay Communications with Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) acting as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize the UAVs to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z)
- Integrated Conflict Management for UAM with Strategic Demand Capacity Balancing and Learning-based Tactical Deconfliction [3.074861321741328]
We propose a novel framework that combines demand capacity balancing (DCB) for strategic conflict management and reinforcement learning for tactical separation.
Our results indicate that this DCB preconditioning enables target levels of safety that are otherwise impossible to meet.
arXiv Detail & Related papers (2023-05-17T20:23:18Z)
- Wireless-Enabled Asynchronous Federated Fourier Neural Network for Turbulence Prediction in Urban Air Mobility (UAM) [101.80862265018033]
Urban air mobility (UAM) has been proposed, in which vertical takeoff and landing (VTOL) aircraft are used to provide a ride-hailing service.
In UAM, aircraft operate in designated airspaces known as corridors that link the aerodromes.
A reliable communication network between ground base stations (GBSs) and aircraft enables UAM to adequately utilize the airspace.
arXiv Detail & Related papers (2021-12-26T14:41:52Z)
- Obstacle Avoidance for UAS in Continuous Action Space Using Deep Reinforcement Learning [9.891207216312937]
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility.
We propose a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide autonomous UAS to their destinations.
Results show that the proposed model can provide accurate and robust guidance and resolve conflicts with a success rate of over 99%.
arXiv Detail & Related papers (2021-11-13T04:44:53Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the autonomous vehicle (AV) make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances in deep reinforcement learning to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks among UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
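As a rough illustration of the "multi-agent economic game" idea in the REPlanner entry directly above, the sketch below allocates tasks to UAVs with a simple greedy sealed-bid auction in which each UAV bids its travel cost. The positions, cost function, and greedy rule are illustrative assumptions, not the REPlanner algorithm itself.
```python
import math

def travel_cost(uav_pos, task_pos):
    """Each UAV 'bids' its Euclidean travel cost to reach a task."""
    return math.dist(uav_pos, task_pos)

def auction_allocate(uav_positions, task_positions):
    """Greedy sealed-bid auction: each task goes to the cheapest still-free UAV."""
    assignments = {}
    free_uavs = set(uav_positions)
    for task, t_pos in task_positions.items():
        if not free_uavs:
            break
        winner = min(free_uavs, key=lambda u: travel_cost(uav_positions[u], t_pos))
        assignments[task] = winner
        free_uavs.remove(winner)
    return assignments

if __name__ == "__main__":
    uavs = {"uav_a": (0.0, 0.0), "uav_b": (5.0, 5.0), "uav_c": (10.0, 0.0)}
    tasks = {"survey_1": (1.0, 1.0), "survey_2": (9.0, 1.0)}
    print(auction_allocate(uavs, tasks))  # {'survey_1': 'uav_a', 'survey_2': 'uav_c'}
```
In a full economic formulation, bids would reflect learned values and agents could compete or cooperate over shared resources; the sketch only shows the basic cost-bidding allocation pattern.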