Integrated Conflict Management for UAM with Strategic Demand Capacity Balancing and Learning-based Tactical Deconfliction
- URL: http://arxiv.org/abs/2305.10556v1
- Date: Wed, 17 May 2023 20:23:18 GMT
- Title: Integrated Conflict Management for UAM with Strategic Demand Capacity Balancing and Learning-based Tactical Deconfliction
- Authors: Shulu Chen, Antony Evans, Marc Brittain and Peng Wei
- Abstract summary: We propose a novel framework that combines demand capacity balancing (DCB) for strategic conflict management and reinforcement learning for tactical separation.
Our results indicate that this DCB preconditioning can allow target levels of safety to be met that are otherwise impossible.
- Score: 3.074861321741328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Urban air mobility (UAM) has the potential to revolutionize our daily
transportation, offering rapid and efficient deliveries of passengers and cargo
between dedicated locations within and around the urban environment. Before the
commercialization and adoption of this emerging transportation mode, however,
aviation safety must be guaranteed, i.e., all the aircraft have to be safely
separated by strategic and tactical deconfliction. Reinforcement learning has
demonstrated effectiveness in the tactical deconfliction of en route commercial
air traffic in simulation. However, its performance is found to be dependent on
the traffic density. In this project, we propose a novel framework that
combines demand capacity balancing (DCB) for strategic conflict management and
reinforcement learning for tactical separation. By using DCB to precondition
traffic to proper density levels, we show that reinforcement learning can
achieve much better performance for tactical safety separation. Our results
also indicate that this DCB preconditioning can allow target levels of safety
to be met that are otherwise impossible. In addition, combining strategic DCB
with reinforcement learning for tactical separation can meet these safety
levels while achieving greater operational efficiency than alternative
solutions.
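To make the two layers concrete, here is a minimal sketch of the division of labor the abstract describes: a strategic DCB pass that delays departures until no time window exceeds a declared capacity, followed by a tactical layer in which a learned policy issues separation advisories. All names (`Flight`, `dcb_precondition`, `tactical_policy`, `capacity_per_window`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the framework's two layers, not the paper's actual code.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Flight:
    flight_id: str
    requested_departure: int  # index of the requested time window

def dcb_precondition(flights, capacity_per_window):
    """Strategic layer: a simple ground-delay scheme that shifts departures
    to later windows until no window exceeds its declared capacity."""
    load = defaultdict(int)
    schedule = {}
    for f in sorted(flights, key=lambda f: f.requested_departure):
        t = f.requested_departure
        while load[t] >= capacity_per_window:
            t += 1  # delay by one window at a time
        load[t] += 1
        schedule[f.flight_id] = t
    return schedule

def tactical_policy(observation):
    """Placeholder for the learned tactical layer: in the paper an RL agent
    issues separation maneuvers once aircraft are airborne; here we simply
    return a no-op advisory."""
    return 0  # e.g. index of a speed-change action from a trained network

if __name__ == "__main__":
    demand = [Flight(f"AC{i}", requested_departure=0) for i in range(5)]
    print(dcb_precondition(demand, capacity_per_window=2))
    # -> {'AC0': 0, 'AC1': 0, 'AC2': 1, 'AC3': 1, 'AC4': 2}
```

Even this toy version shows why preconditioning helps: capping each window's load bounds the traffic density the tactical policy ever has to resolve, which is the regime in which the abstract reports RL meeting target safety levels.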
Related papers
- Mitigating Partial Observability in Adaptive Traffic Signal Control with Transformers [26.1987660654434]
Reinforcement Learning (RL) has emerged as a promising approach to enhancing adaptive traffic signal control (ATSC) systems.
This paper integrates Transformer-based controllers into ATSC systems to address partial observability (PO).
The results showcase the Transformer-based model's ability to capture significant information from historical observations, leading to better control policies and improved traffic flow.
arXiv Detail & Related papers (2024-09-16T19:46:15Z)
- Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models [51.85781332922943]
Federated learning (FL) enables multiple parties to collaboratively fine-tune a large language model (LLM) without directly sharing data.
We reveal, for the first time, the vulnerability of safety alignment in federated instruction tuning (FedIT) by proposing a simple, stealthy, yet effective safety attack method.
arXiv Detail & Related papers (2024-06-15T13:24:22Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Learning Speed Adaptation for Flight in Clutter [3.8876619768726157]
Animals learn to adapt the speed of their movements to their capabilities and the environment they observe.
Mobile robots should likewise be able to trade off aggressiveness against safety to accomplish tasks efficiently.
This work endows flight vehicles with the ability to adapt their speed in previously unknown, partially observable, cluttered environments.
arXiv Detail & Related papers (2024-03-07T15:30:54Z)
- Meta Reinforcement Learning for Strategic IoT Deployments Coverage in Disaster-Response UAV Swarms [5.57865728456594]
Unmanned Aerial Vehicles (UAVs) have attracted the attention of researchers in academia and industry for their potential use in critical emergency applications.
These applications include providing wireless services to ground users and collecting data from areas affected by disasters.
UAVs' limited resources, energy budgets, and strict mission completion times pose challenges to adopting them for these applications.
arXiv Detail & Related papers (2024-01-20T05:05:39Z)
- Improving Autonomous Separation Assurance through Distributed Reinforcement Learning with Attention Networks [0.0]
We present a reinforcement learning framework to provide autonomous self-separation capabilities within advanced air mobility (AAM) corridors.
The problem is formulated as a Markov Decision Process and solved by developing a novel extension to the sample-efficient, off-policy soft actor-critic (SAC) algorithm.
A comprehensive numerical study shows that the proposed framework can ensure safe and efficient separation of aircraft in high-density, dynamic environments.
arXiv Detail & Related papers (2023-08-09T13:44:35Z)
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- A deep reinforcement learning approach to assess the low-altitude airspace capacity for urban air mobility [0.0]
Urban air mobility aims to provide a fast and secure way of travel by utilizing the low-altitude airspace.
Authorities are still drafting new flight rules applicable to urban air mobility.
An autonomous UAV path planning framework is proposed using a deep reinforcement learning approach and a deep deterministic policy gradient algorithm.
arXiv Detail & Related papers (2023-01-23T23:38:05Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances in deep reinforcement learning to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.