A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air
Traffic Control
- URL: http://arxiv.org/abs/2004.01387v1
- Date: Fri, 3 Apr 2020 06:03:53 GMT
- Title: A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air
Traffic Control
- Authors: Supriyo Ghosh, Sean Laguna, Shiau Hong Lim, Laura Wynter and Hasan
Poonawala
- Abstract summary: We propose a new intelligent decision making framework that leverages multi-agent reinforcement learning (MARL) to suggest adjustments of aircraft speeds in real-time.
The goal of the system is to enhance the ability of an air traffic controller to provide effective guidance to aircraft to avoid air traffic congestion, near-miss situations, and to improve arrival timeliness.
- Score: 5.550794444001022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Air traffic control is an example of a highly challenging operational problem
that is readily amenable to human expertise augmentation via decision support
technologies. In this paper, we propose a new intelligent decision making
framework that leverages multi-agent reinforcement learning (MARL) to
dynamically suggest adjustments of aircraft speeds in real-time. The goal of
the system is to enhance the ability of an air traffic controller to provide
effective guidance to aircraft to avoid air traffic congestion, near-miss
situations, and to improve arrival timeliness. We develop a novel deep ensemble
MARL method that can concisely capture the complexity of the air traffic
control problem by learning to efficiently arbitrate between the decisions of a
local kernel-based RL model and a wider-reaching deep MARL model. The proposed
method is trained and evaluated on an open-source air traffic management
simulator developed by Eurocontrol. Extensive empirical results on a real-world
dataset including thousands of aircraft demonstrate the feasibility of using
multi-agent RL for the problem of en-route air traffic control and show that
our proposed deep ensemble MARL method significantly outperforms three
state-of-the-art benchmark approaches.
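The core idea of the abstract, learning to arbitrate between a local kernel-based RL model and a wider-reaching deep MARL model, can be sketched as follows. This is a minimal illustration only: the two policy functions and the bandit-style arbiter below are hypothetical stand-ins (the paper's actual models, state features, and arbitration mechanism are not specified here), with speed adjustments reduced to {-1, 0, +1}.

```python
import random

# Hypothetical stand-ins for the two base policies. Each maps an aircraft
# state dict to a speed adjustment: -1 (slow down), 0 (hold), +1 (speed up).
# These rules are illustrative, not the paper's models.
def local_kernel_policy(state):
    # Local view: slow down when nearby traffic is dense.
    return -1 if state["nearby_traffic"] > 2 else 0

def deep_marl_policy(state):
    # Wider view: speed up to recover delay when the corridor is clear.
    return +1 if state["delay"] > 0 and state["nearby_traffic"] == 0 else 0

class EnsembleArbiter:
    """Toy arbiter: an epsilon-greedy bandit over which base policy to
    trust, updated from observed rewards. A simplified stand-in for the
    paper's learned arbitration, not its actual method."""

    def __init__(self, policies, epsilon=0.1):
        self.policies = policies
        self.epsilon = epsilon
        self.value = [0.0] * len(policies)  # running mean reward per policy
        self.count = [0] * len(policies)

    def act(self, state):
        # Explore with probability epsilon, otherwise pick the policy
        # with the highest estimated reward.
        if random.random() < self.epsilon:
            idx = random.randrange(len(self.policies))
        else:
            idx = max(range(len(self.policies)), key=lambda i: self.value[i])
        return idx, self.policies[idx](state)

    def update(self, idx, reward):
        # Incremental running-mean update of the chosen policy's value.
        self.count[idx] += 1
        self.value[idx] += (reward - self.value[idx]) / self.count[idx]
```

In use, the arbiter would be queried once per aircraft per decision step and updated with a reward reflecting congestion, separation, and arrival timeliness; the paper's method learns a richer arbitration than this per-policy mean, but the control flow is analogous.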
Related papers
- Aerial Reliable Collaborative Communications for Terrestrial Mobile Users via Evolutionary Multi-Objective Deep Reinforcement Learning [59.660724802286865]
Unmanned aerial vehicles (UAVs) have emerged as the potential aerial base stations (BSs) to improve terrestrial communications.
This work employs collaborative beamforming through a UAV-enabled virtual antenna array to improve transmission performance from the UAV to terrestrial mobile users.
arXiv Detail & Related papers (2025-02-09T09:15:47Z)
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming six-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z)
- Airport take-off and landing optimization through genetic algorithms [55.2480439325792]
This research addresses the crucial issue of pollution from aircraft operations, focusing on optimizing both gate allocation and runway scheduling simultaneously.
The study presents an innovative genetic algorithm-based method for minimizing pollution from fuel combustion during aircraft take-off and landing at airports.
arXiv Detail & Related papers (2024-02-29T14:53:55Z)
- Improving Autonomous Separation Assurance through Distributed Reinforcement Learning with Attention Networks [0.0]
We present a reinforcement learning framework to provide autonomous self-separation capabilities within AAM corridors.
The problem is formulated as a Markov Decision Process and solved by developing a novel extension to the sample-efficient, off-policy soft actor-critic (SAC) algorithm.
A comprehensive numerical study shows that the proposed framework can ensure safe and efficient separation of aircraft in high density, dynamic environments.
arXiv Detail & Related papers (2023-08-09T13:44:35Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Automating the resolution of flight conflicts: Deep reinforcement learning in service of air traffic controllers [0.0]
Dense and complex air traffic scenarios require higher levels of automation than those exhibited by tactical conflict detection and resolution (CD&R) tools that air traffic controllers (ATCO) use today.
This paper proposes using a graph convolutional reinforcement learning method operating in a multiagent setting where each agent (flight) performs a CD&R task, jointly with other agents.
We show that this method can provide high-quality solutions with respect to stakeholders' interests (air traffic controllers and airspace users), addressing operational transparency issues.
arXiv Detail & Related papers (2022-06-15T09:06:58Z)
- A Simplified Framework for Air Route Clustering Based on ADS-B Data [0.0]
This paper presents a framework for detecting typical air routes between airports based on ADS-B data.
In practice, the framework can substantially reduce the computational cost of air flow optimization.
arXiv Detail & Related papers (2021-07-07T08:55:31Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
- An Autonomous Free Airspace En-route Controller using Deep Reinforcement Learning Techniques [24.59017394648942]
An air traffic control model is presented that guides an arbitrary number of aircraft across a three-dimensional, unstructured airspace.
Results show that the air traffic control model performs well on realistic traffic densities.
It is capable of managing the airspace by avoiding 100% of potential collisions and preventing 89.8% of potential conflicts.
arXiv Detail & Related papers (2020-07-03T10:37:25Z)
- A Deep Multi-Agent Reinforcement Learning Approach to Autonomous Separation Assurance [5.196149362684628]
A novel deep multi-agent reinforcement learning framework is proposed to identify and resolve conflicts among a variable number of aircraft.
The proposed framework is validated on three challenging case studies in the BlueSky air traffic control environment.
arXiv Detail & Related papers (2020-03-17T16:50:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.