Automating the resolution of flight conflicts: Deep reinforcement
learning in service of air traffic controllers
- URL: http://arxiv.org/abs/2206.07403v1
- Date: Wed, 15 Jun 2022 09:06:58 GMT
- Title: Automating the resolution of flight conflicts: Deep reinforcement
learning in service of air traffic controllers
- Authors: George Vouros, George Papadopoulos, Alevizos Bastas, Jose Manuel
Cordero, Ruben Rodriguez Rodriguez
- Abstract summary: Dense and complex air traffic scenarios require higher levels of automation than those exhibited by tactical conflict detection and resolution (CD&R) tools that air traffic controllers (ATCO) use today.
This paper proposes using a graph convolutional reinforcement learning method operating in a multiagent setting where each agent (flight) performs a CD&R task, jointly with other agents.
We show that this method can provide high-quality solutions with respect to stakeholders' interests (air traffic controllers and airspace users), addressing operational transparency issues.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dense and complex air traffic scenarios require higher levels of automation
than those exhibited by tactical conflict detection and resolution (CD&R)
tools that air traffic controllers (ATCO) use today. However, the air traffic
control (ATC) domain, being safety critical, requires AI systems to which
operators are comfortable relinquishing control, guaranteeing operational
integrity and automation adoption. Two major factors towards this goal are
quality of solutions, and transparency in decision making. This paper proposes
using a graph convolutional reinforcement learning method operating in a
multiagent setting where each agent (flight) performs a CD&R task, jointly
with other agents. We show that this method can provide high-quality solutions
with respect to stakeholders' interests (air traffic controllers and airspace
users), addressing operational transparency issues.
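The core idea of the abstract can be illustrated with a minimal sketch: each flight is a node in a graph, edges connect flights close enough to be in potential conflict, and a graph-convolution layer lets each agent condition its CD&R action on its neighbours' states. The names, dimensions, and action set below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete resolution actions for each flight (assumed set).
ACTIONS = ["maintain", "climb", "descend", "turn_left", "turn_right"]

def adjacency(positions, radius):
    """Connect flights whose horizontal distance is below `radius`."""
    n = len(positions)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(positions[i] - positions[j]) < radius:
                a[i, j] = 1.0
    # Add self-loops and row-normalise, as in standard graph convolutions.
    a += np.eye(n)
    return a / a.sum(axis=1, keepdims=True)

def graph_conv_policy(states, adj, w_conv, w_out):
    """One graph-convolution layer followed by per-agent action logits."""
    h = np.tanh(adj @ states @ w_conv)  # aggregate neighbour features
    return h @ w_out                    # one logit row per flight

# Toy scenario: 4 flights, state = (x, y, plus two extra features).
positions = np.array([[0.0, 0.0], [3.0, 0.5], [50.0, 50.0], [52.0, 49.0]])
states = np.hstack([positions, rng.normal(size=(4, 2))])

adj = adjacency(positions, radius=10.0)
w_conv = rng.normal(size=(4, 8))          # untrained weights, for shape only
w_out = rng.normal(size=(8, len(ACTIONS)))

logits = graph_conv_policy(states, adj, w_conv, w_out)
actions = [ACTIONS[i] for i in logits.argmax(axis=1)]
print(actions)  # one resolution action per flight
```

In the paper's setting these weights would be trained with reinforcement learning so that jointly selected actions resolve conflicts; here they are random, and the snippet only shows the graph-structured information flow.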
Related papers
- Tradeoffs When Considering Deep Reinforcement Learning for Contingency Management in Advanced Air Mobility
Air transportation is undergoing a rapid evolution globally with the introduction of Advanced Air Mobility (AAM).
Increased levels of automation are likely necessary to achieve operational safety and efficiency goals.
This paper explores the use of Deep Reinforcement Learning (DRL) which has shown promising performance in complex and high-dimensional environments.
arXiv Detail & Related papers (2024-06-28T19:09:55Z)
- Towards Engineering Fair and Equitable Software Systems for Managing Low-Altitude Airspace Authorizations
Small Unmanned Aircraft Systems (sUAS) have gained widespread adoption across a diverse range of applications.
The FAA is developing a UAS Traffic Management (UTM) system to control access to airspace based on an sUAS's predicted ability to safely complete its mission.
This paper explores stakeholders' perspectives on factors that should be considered in an automated system.
arXiv Detail & Related papers (2024-01-14T19:40:32Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Reinforcement Learning-Based Air Traffic Deconfliction
This work focuses on automating the horizontal separation of two aircraft and presents the obstacle avoidance problem as a 2D surrogate optimization task.
Using Reinforcement Learning (RL), we optimize the avoidance policy and model the dynamics, interactions, and decision-making.
The proposed system generates a quick and achievable avoidance trajectory that satisfies the safety requirements.
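The safety requirement underlying horizontal deconfliction can be made concrete with a short, self-contained sketch (not taken from the cited paper): given two aircraft with constant 2D velocities, compute the time and distance at the closest point of approach (CPA) and flag a conflict if the miss distance falls below a separation minimum. The 5 NM minimum is a common en-route standard; units here are nautical miles and hours.

```python
import numpy as np

def closest_point_of_approach(p1, v1, p2, v2):
    """Return (t_cpa, d_cpa) for two constant-velocity 2D trajectories."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)  # relative position
    dv = np.asarray(v1, float) - np.asarray(v2, float)  # relative velocity
    dv2 = dv @ dv
    # If relative velocity is zero, the separation never changes (t = 0);
    # otherwise minimise |dp + t*dv| over t >= 0.
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -(dp @ dv) / dv2)
    d_cpa = float(np.linalg.norm(dp + t_cpa * dv))
    return t_cpa, d_cpa

def in_conflict(p1, v1, p2, v2, separation_nm=5.0):
    """True if the predicted miss distance violates the separation minimum."""
    _, d = closest_point_of_approach(p1, v1, p2, v2)
    return d < separation_nm

# Head-on encounter: 40 NM apart, closing at 400 kt each -> conflict.
print(in_conflict([0, 0], [400, 0], [40, 0], [-400, 0]))  # True
```

An RL avoidance policy of the kind described above would be rewarded for producing trajectories for which this predicate stays false while deviating as little as possible from the planned route.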
arXiv Detail & Related papers (2023-01-05T00:37:20Z)
- Learning energy-efficient driving behaviors by imitating experts
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances in deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
- A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air Traffic Control
We propose a new intelligent decision making framework that leverages multi-agent reinforcement learning (MARL) to suggest adjustments of aircraft speeds in real-time.
The goal of the system is to enhance the ability of an air traffic controller to provide effective guidance to aircraft to avoid air traffic congestion, near-miss situations, and to improve arrival timeliness.
arXiv Detail & Related papers (2020-04-03T06:03:53Z)
- A Deep Multi-Agent Reinforcement Learning Approach to Autonomous Separation Assurance
A novel deep multi-agent reinforcement learning framework is proposed to identify and resolve conflicts among a variable number of aircraft.
The proposed framework is validated on three challenging case studies in the BlueSky air traffic control environment.
arXiv Detail & Related papers (2020-03-17T16:50:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.