An Autonomous Free Airspace En-route Controller using Deep Reinforcement
Learning Techniques
- URL: http://arxiv.org/abs/2007.01599v1
- Date: Fri, 3 Jul 2020 10:37:25 GMT
- Title: An Autonomous Free Airspace En-route Controller using Deep Reinforcement
Learning Techniques
- Authors: Joris Mollinga, Herke van Hoof
- Abstract summary: An air traffic control model is presented that guides an arbitrary number of aircraft across a three-dimensional, unstructured airspace.
Results show that the air traffic control model performs well on realistic traffic densities.
It is capable of managing the airspace by avoiding 100% of potential collisions and preventing 89.8% of potential conflicts.
- Score: 24.59017394648942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Air traffic control is becoming an increasingly complex task due to
the growing number of aircraft. Current air traffic control methods are not
suitable for managing this increased traffic. Autonomous air traffic control is
deemed a promising alternative. In this paper an air traffic control model is
presented that guides an arbitrary number of aircraft across a
three-dimensional, unstructured airspace while avoiding conflicts and
collisions. This is done by utilizing graph-based deep learning approaches.
These approaches offer significant advantages over current approaches to this
task, such as invariance to the input ordering of aircraft and the ability to
easily cope with a varying number of aircraft. Results acquired using these
approaches show that the air traffic control model performs well on realistic
traffic densities; it is capable of managing the airspace by avoiding 100% of
potential collisions and preventing 89.8% of potential conflicts.
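The input-ordering invariance the abstract attributes to graph-based approaches comes from pooling neighbor features with a symmetric function (e.g. a mean or sum), which also accepts any number of aircraft. A minimal sketch of that idea; the 4-dimensional state layout and mean pooling are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def aggregate_neighbors(own_state, neighbor_states):
    """Encode an aircraft's situation by pooling neighbor features.

    The mean over neighbor states is a symmetric function: it does not
    depend on the order in which neighboring aircraft are listed, and it
    is defined for any number of neighbors (including zero).
    """
    if len(neighbor_states) == 0:
        pooled = np.zeros_like(own_state)
    else:
        pooled = np.mean(neighbor_states, axis=0)
    return np.concatenate([own_state, pooled])

# Hypothetical per-aircraft state: (x, y, altitude, speed).
own = np.array([0.0, 0.0, 10.0, 250.0])
a = np.array([1.0, 2.0, 11.0, 240.0])
b = np.array([-3.0, 0.5, 9.0, 260.0])

# Reordering the neighbor list leaves the encoding unchanged,
# and a third aircraft can be added without changing the code.
enc1 = aggregate_neighbors(own, [a, b])
enc2 = aggregate_neighbors(own, [b, a])
assert np.allclose(enc1, enc2)
```

In a full graph neural network this pooled encoding would be passed through learned layers and repeated over several message-passing rounds, but the invariance property shown here is what it inherits.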
Related papers
- Decentralized traffic management of autonomous drones [0.3374875022248865]
We present a solution that enables self-organization of cooperating autonomous agents into an effective aerial coordination system.
We show that our algorithm is safe, efficient, and scalable with respect to the number of drones and their speed range.
We experimentally demonstrate coordinated aerial traffic of 100 autonomous drones within a circular area with a radius of 125 meters.
arXiv Detail & Related papers (2023-12-18T13:52:52Z)
- Toward collision-free trajectory for autonomous and pilot-controlled unmanned aerial vehicles [1.018017727755629]
This study makes greater use of electronic conspicuity (EC) information made available by PilotAware Ltd in developing an advanced collision management methodology.
The merits of the DACM methodology have been demonstrated through extensive simulations and real-world field tests in avoiding mid-air collisions.
arXiv Detail & Related papers (2023-09-18T18:24:31Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- A deep reinforcement learning approach to assess the low-altitude airspace capacity for urban air mobility [0.0]
Urban air mobility aims to provide a fast and secure way of travel by utilizing the low-altitude airspace.
Authorities are still drafting new flight rules applicable to urban air mobility.
An autonomous UAV path planning framework is proposed using a deep reinforcement learning approach and a deep deterministic policy gradient algorithm.
arXiv Detail & Related papers (2023-01-23T23:38:05Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wirelessly connected ATSC increases cyber-attack surfaces and increases their vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil-vehicle injection to create congestion for one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Obstacle Avoidance for UAS in Continuous Action Space Using Deep Reinforcement Learning [9.891207216312937]
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility.
We propose a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide autonomous UAS to their destinations.
Results show that the proposed model can provide accurate and robust guidance and resolve conflict with a success rate of over 99%.
arXiv Detail & Related papers (2021-11-13T04:44:53Z)
- Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads [69.21503033239985]
Transporting suspended payloads is challenging for autonomous aerial vehicles.
We propose a meta-learning approach that "learns how to learn" models of altered dynamics within seconds of post-connection flight data.
arXiv Detail & Related papers (2020-04-23T17:43:56Z)
- A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air Traffic Control [5.550794444001022]
We propose a new intelligent decision making framework that leverages multi-agent reinforcement learning (MARL) to suggest adjustments of aircraft speeds in real-time.
The goal of the system is to enhance the ability of an air traffic controller to provide effective guidance to aircraft to avoid air traffic congestion, near-miss situations, and to improve arrival timeliness.
arXiv Detail & Related papers (2020-04-03T06:03:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.