Learning to Explain Air Traffic Situation
- URL: http://arxiv.org/abs/2502.10764v1
- Date: Sat, 15 Feb 2025 11:03:47 GMT
- Title: Learning to Explain Air Traffic Situation
- Authors: Hong-ah Chai, Seokbin Yoon, Keumjin Lee
- Abstract summary: We propose a machine learning-based framework for explaining air traffic situations.
Specifically, we employ a Transformer-based multi-agent trajectory model that encapsulates both the spatio-temporal movement of aircraft and the social interaction between them.
This provides explainable insights into how air traffic controllers perceive and understand the traffic situation.
- Score: 0.6759148939470331
- Abstract: Understanding how air traffic controllers construct a mental 'picture' of complex air traffic situations is crucial but remains a challenge due to the inherently intricate, high-dimensional interactions between aircraft, pilots, and controllers. Previous work on modeling the strategies of air traffic controllers and their mental image of traffic situations often centers on specific air traffic control tasks or pairwise interactions between aircraft, neglecting to capture the comprehensive dynamics of an air traffic situation. To address this issue, we propose a machine learning-based framework for explaining air traffic situations. Specifically, we employ a Transformer-based multi-agent trajectory model that encapsulates both the spatio-temporal movement of aircraft and social interaction between them. By deriving attention scores from the model, we can quantify the influence of individual aircraft on overall traffic dynamics. This provides explainable insights into how air traffic controllers perceive and understand the traffic situation. Trained on real-world air traffic surveillance data collected from the terminal airspace around Incheon International Airport in South Korea, our framework effectively explicates air traffic situations. This could potentially support and enhance the decision-making and situational awareness of air traffic controllers.
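The listing includes no code, but the core idea, deriving per-aircraft influence scores from the attention weights of a Transformer-style multi-agent trajectory encoder, can be sketched as follows. This is an illustrative sketch only: the GRU temporal encoder, layer sizes, and the mean-received-attention aggregation are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of extracting per-aircraft influence
# scores from attention weights in a Transformer-style multi-agent trajectory
# model. Shapes, encoders, and the aggregation rule are assumptions.
import torch
import torch.nn as nn


class TrajectoryAttentionExplainer(nn.Module):
    def __init__(self, state_dim: int = 4, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Per-aircraft temporal encoder: embeds each trajectory into one vector.
        self.temporal_encoder = nn.GRU(state_dim, d_model, batch_first=True)
        # Social interaction layer: attention across aircraft.
        self.social_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, trajectories: torch.Tensor):
        # trajectories: (num_aircraft, T, state_dim)
        _, h = self.temporal_encoder(trajectories)      # h: (1, N, d_model)
        agent_emb = h.squeeze(0).unsqueeze(0)           # (1, N, d_model)
        # Self-attention over aircraft; attn_weights: (1, N, N),
        # row i = how much aircraft i attends to each other aircraft.
        out, attn_weights = self.social_attn(
            agent_emb, agent_emb, agent_emb, need_weights=True
        )
        # One possible aggregation: influence of aircraft j is the average
        # attention it receives from all aircraft in the scene.
        influence = attn_weights.squeeze(0).mean(dim=0)  # (N,)
        return out, influence


if __name__ == "__main__":
    model = TrajectoryAttentionExplainer()
    traj = torch.randn(6, 20, 4)  # 6 aircraft, 20 time steps, 4 features each
    _, influence = model(traj)
    print("per-aircraft influence scores:", influence.tolist())
```

Here an aircraft's score is the attention it receives averaged over all other aircraft; the paper's actual aggregation of attention scores may differ.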
Related papers
- GARLIC: GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [81.82487256783674]
This paper introduces GARLIC, a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z) - Integrating spoken instructions into flight trajectory prediction to optimize automation in air traffic control [20.718663626382995]
Current air traffic control systems fail to consider spoken instructions for traffic prediction.
We present an automation paradigm integrating controlling intent into the information processing loop.
A 3-stage progressive multi-modal learning paradigm is proposed to address the modality gap between the trajectory and spoken instructions.
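As a rough illustration of conditioning trajectory prediction on spoken instructions (not the paper's 3-stage progressive paradigm), one could fuse an instruction encoding with trajectory features via cross-attention. The encoders, dimensions, and fusion choice below are assumptions.

```python
# Illustrative sketch only: fusing an encoded spoken instruction with
# trajectory features for next-state prediction. All design choices here are
# assumptions, not the paper's multi-modal learning paradigm.
import torch
import torch.nn as nn


class InstructionConditionedPredictor(nn.Module):
    def __init__(self, traj_dim: int = 4, vocab_size: int = 1000, d_model: int = 64):
        super().__init__()
        self.traj_encoder = nn.GRU(traj_dim, d_model, batch_first=True)
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Cross-attention: the trajectory state queries the instruction tokens,
        # one simple way to bridge the trajectory/speech modality gap.
        self.cross_attn = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.head = nn.Linear(d_model, traj_dim)  # predicts the next state

    def forward(self, traj: torch.Tensor, instruction_tokens: torch.Tensor):
        # traj: (B, T, traj_dim); instruction_tokens: (B, L) token ids
        enc, _ = self.traj_encoder(traj)               # (B, T, d_model)
        query = enc[:, -1:, :]                         # last trajectory state
        keys = self.text_embed(instruction_tokens)     # (B, L, d_model)
        fused, _ = self.cross_attn(query, keys, keys)  # (B, 1, d_model)
        return self.head(fused.squeeze(1))             # (B, traj_dim)


if __name__ == "__main__":
    model = InstructionConditionedPredictor()
    pred = model(torch.randn(2, 10, 4), torch.randint(0, 1000, (2, 7)))
    print(pred.shape)  # torch.Size([2, 4])
```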
arXiv Detail & Related papers (2023-05-02T08:28:55Z) - Towards Cooperative Flight Control Using Visual-Attention [61.99121057062421]
We propose a vision-based air-guardian system to enable parallel autonomy between a pilot and a control system.
Our attention-based air-guardian system can balance the trade-off between its level of involvement in the flight and the pilot's expertise and attention.
arXiv Detail & Related papers (2022-12-21T15:31:47Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Automating the resolution of flight conflicts: Deep reinforcement learning in service of air traffic controllers [0.0]
Dense and complex air traffic scenarios require higher levels of automation than those exhibited by tactical conflict detection and resolution (CD&R) tools that air traffic controllers (ATCO) use today.
This paper proposes using a graph convolutional reinforcement learning method operating in a multiagent setting where each agent (flight) performs a CD&R task, jointly with other agents.
We show that this method can provide high-quality solutions with respect to stakeholders' interests (air traffic controllers and airspace users), addressing operational transparency issues.
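A toy sketch of the general idea, a message-passing policy over a graph of flights, is shown below. The distance-threshold graph construction, network sizes, and discrete action set are assumptions and not the paper's graph convolutional reinforcement learning method.

```python
# Illustrative sketch only: a toy message-passing policy over a flight graph,
# in the spirit of graph-based multi-agent conflict resolution.
import torch
import torch.nn as nn


class FlightGraphPolicy(nn.Module):
    def __init__(self, state_dim: int = 4, hidden: int = 32, n_actions: int = 5):
        super().__init__()
        self.encode = nn.Linear(state_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.policy = nn.Linear(2 * hidden, n_actions)

    def forward(self, states: torch.Tensor, adjacency: torch.Tensor):
        # states: (N, state_dim) per-flight features; adjacency: (N, N) 0/1 matrix
        h = torch.relu(self.encode(states))                 # (N, hidden)
        # Aggregate messages from neighbouring flights (one GCN-like step).
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbour_msg = adjacency @ self.message(h) / deg   # (N, hidden)
        # Each flight's action logits depend on its own state and its neighbours.
        return self.policy(torch.cat([h, neighbour_msg], dim=1))  # (N, n_actions)


def build_adjacency(positions: torch.Tensor, radius: float) -> torch.Tensor:
    # Connect flights closer than `radius` (arbitrary threshold for the sketch).
    dists = torch.cdist(positions, positions)
    adj = (dists < radius).float()
    return adj - torch.eye(positions.shape[0])  # drop self-loops


if __name__ == "__main__":
    states = torch.randn(8, 4)  # 8 flights; first two features used as x/y here
    adj = build_adjacency(states[:, :2], radius=1.5)
    logits = FlightGraphPolicy()(states, adj)
    print(logits.shape)  # torch.Size([8, 5])
```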
arXiv Detail & Related papers (2022-06-15T09:06:58Z) - Call-sign recognition and understanding for noisy air-traffic transcripts using surveillance information [72.20674534231314]
Air traffic control (ATC) relies on communication via speech between the pilot and the air traffic controller (ATCO).
The call-sign, as the unique identifier for each flight, is used by the ATCO to address a specific pilot.
We propose a new call-sign recognition and understanding (CRU) system that addresses this issue.
The recognizer is trained to identify call-signs in noisy ATC transcripts and convert them into the standard International Civil Aviation Organization (ICAO) format.
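As a hedged illustration of the conversion step only (not the paper's CRU system), a rule-based normaliser for spoken call-signs might look like the sketch below; the tiny airline and spoken-digit lookup tables are assumptions and cover only a few examples.

```python
# Illustrative, rule-based sketch: converting a spoken call-sign from an ATC
# transcript into ICAO-style format. Lookup tables are assumed examples;
# alphanumeric suffixes (e.g. "alpha bravo") are not handled here.
import re
from typing import Optional

AIRLINE_TO_ICAO = {"korean air": "KAL", "lufthansa": "DLH", "speedbird": "BAW"}
SPOKEN_DIGITS = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "tree": "3", "four": "4",
    "five": "5", "fife": "5", "six": "6", "seven": "7", "eight": "8",
    "nine": "9", "niner": "9",
}


def normalize_callsign(transcript: str) -> Optional[str]:
    """Return an ICAO-style call-sign (e.g. 'KAL123') found in a transcript."""
    text = transcript.lower()
    for airline, icao in AIRLINE_TO_ICAO.items():
        if airline not in text:
            continue
        tail = text.split(airline, 1)[1]
        digits = []
        for word in re.findall(r"[a-z]+", tail):
            if word in SPOKEN_DIGITS:
                digits.append(SPOKEN_DIGITS[word])
            else:
                break  # stop at the first non-digit word after the airline name
        if digits:
            return icao + "".join(digits)
    return None


if __name__ == "__main__":
    print(normalize_callsign(
        "korean air one two three descend flight level one two zero"
    ))  # -> KAL123
```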
arXiv Detail & Related papers (2022-04-13T11:30:42Z) - Wireless-Enabled Asynchronous Federated Fourier Neural Network for Turbulence Prediction in Urban Air Mobility (UAM) [101.80862265018033]
Urban air mobility (UAM) has been proposed, in which vertical takeoff and landing (VTOL) aircraft provide a ride-hailing service.
In UAM, aircraft operate in designated airspaces known as corridors that link the aerodromes.
A reliable communication network between ground base stations (GBSs) and aircraft enables UAM to adequately utilize the airspace.
arXiv Detail & Related papers (2021-12-26T14:41:52Z) - A Simplified Framework for Air Route Clustering Based on ADS-B Data [0.0]
This paper presents a framework that supports detecting the typical air routes between airports based on ADS-B data.
In practice, the framework can be used to reduce the computational cost of air flow optimization.
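A minimal sketch of the general approach, clustering resampled ADS-B tracks to surface typical routes, is given below; the fixed-length resampling, DBSCAN parameters, and synthetic coordinates are assumptions rather than the paper's method.

```python
# Illustrative sketch only: clustering ADS-B trajectories between an airport
# pair to surface typical air routes. Resampling to a fixed length and DBSCAN
# on flattened tracks are assumptions for this example.
import numpy as np
from sklearn.cluster import DBSCAN


def resample_track(track: np.ndarray, n_points: int = 50) -> np.ndarray:
    """Resample a (T, 2) lat/lon track to a fixed number of points."""
    t_old = np.linspace(0.0, 1.0, len(track))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(t_new, t_old, track[:, i]) for i in range(2)])


def cluster_routes(tracks: list, eps: float = 0.5) -> np.ndarray:
    """Return a cluster label per track; -1 marks tracks labelled as noise."""
    features = np.stack([resample_track(t).ravel() for t in tracks])
    return DBSCAN(eps=eps, min_samples=3).fit_predict(features)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "routes": straight tracks with small noise around two corridors.
    base_a = np.linspace([37.4, 126.4], [37.6, 127.0], 40)
    base_b = np.linspace([37.4, 126.4], [37.2, 126.9], 40)
    tracks = [base_a + rng.normal(0, 0.01, base_a.shape) for _ in range(10)]
    tracks += [base_b + rng.normal(0, 0.01, base_b.shape) for _ in range(10)]
    print(cluster_routes(tracks))  # two clusters of 10 tracks each
```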
arXiv Detail & Related papers (2021-07-07T08:55:31Z) - An Autonomous Free Airspace En-route Controller using Deep Reinforcement Learning Techniques [24.59017394648942]
An air traffic control model is presented that guides an arbitrary number of aircraft across a three-dimensional, unstructured airspace.
Results show that the air traffic control model performs well on realistic traffic densities.
It is capable of managing the airspace by avoiding 100% of potential collisions and preventing 89.8% of potential conflicts.
arXiv Detail & Related papers (2020-07-03T10:37:25Z) - Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads [69.21503033239985]
Transporting suspended payloads is challenging for autonomous aerial vehicles.
We propose a meta-learning approach that "learns how to learn" models of altered dynamics within seconds of post-connection flight data.
arXiv Detail & Related papers (2020-04-23T17:43:56Z) - A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air Traffic Control [5.550794444001022]
We propose a new intelligent decision making framework that leverages multi-agent reinforcement learning (MARL) to suggest adjustments of aircraft speeds in real-time.
The goal of the system is to enhance an air traffic controller's ability to provide effective guidance to aircraft, avoiding air traffic congestion and near-miss situations and improving arrival timeliness.
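As an illustration only (not the paper's deep ensemble MARL architecture), a shared Q-network ensemble that suggests discrete speed adjustments per aircraft could be sketched as follows; the action set, state features, and ensemble-averaging rule are assumptions.

```python
# Illustrative sketch only: a shared Q-network ensemble suggesting discrete
# speed adjustments per aircraft. Design choices are assumptions.
import torch
import torch.nn as nn

ACTIONS = ("decelerate", "hold", "accelerate")


class SpeedQNet(nn.Module):
    def __init__(self, state_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(ACTIONS)),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def suggest_actions(states: torch.Tensor, ensemble: list) -> list:
    # Average Q-values across ensemble members, then act greedily per aircraft.
    with torch.no_grad():
        q = torch.stack([m(states) for m in ensemble]).mean(dim=0)  # (N, 3)
    return [ACTIONS[i] for i in q.argmax(dim=1).tolist()]


if __name__ == "__main__":
    ensemble = [SpeedQNet() for _ in range(3)]  # untrained, for shape checks only
    states = torch.randn(5, 6)                  # 5 aircraft, 6 features each
    print(suggest_actions(states, ensemble))
```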
arXiv Detail & Related papers (2020-04-03T06:03:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.