Towards a Standardized Reinforcement Learning Framework for AAM
Contingency Management
- URL: http://arxiv.org/abs/2311.10805v1
- Date: Fri, 17 Nov 2023 13:54:02 GMT
- Title: Towards a Standardized Reinforcement Learning Framework for AAM
Contingency Management
- Authors: Luis E. Alvarez, Marc W. Brittain, Kara Breeden
- Abstract summary: We formulate the contingency management problem as a Markov Decision Process (MDP) and integrate it into the AAM-Gym simulation framework.
This enables rapid prototyping of reinforcement learning algorithms and evaluation of existing systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advanced Air Mobility (AAM) is the next generation of air transportation that
includes new entrants such as electric vertical takeoff and landing (eVTOL)
aircraft, increasingly autonomous flight operations, and small UAS package
delivery. With these new vehicles and operational concepts comes a desire to
increase densities far beyond what occurs today in and around urban areas, to
utilize new battery technology, and to move toward more autonomously-piloted
aircraft. To achieve these goals, it becomes essential to introduce new safety
management system capabilities that can rapidly assess risk as it evolves
across a span of complex hazards and, if necessary, mitigate risk by executing
appropriate contingencies via supervised or automated decision-making during
flights. Recently, reinforcement learning has shown promise for real-time
decision making across a wide variety of applications including contingency
management. In this work, we formulate the contingency management problem as a
Markov Decision Process (MDP) and integrate the contingency management MDP into
the AAM-Gym simulation framework. This enables rapid prototyping of
reinforcement learning algorithms and evaluation of existing systems, thus
providing a community benchmark for future algorithm development. We report
baseline statistical information for the environment and provide example
performance metrics.
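To make the MDP formulation concrete, the sketch below builds a deliberately tiny contingency-management MDP and solves it with value iteration. The states, actions, probabilities, and rewards are hypothetical illustrations, not the paper's actual AAM-Gym environment.

```python
# Toy contingency-management MDP (hypothetical states, actions, and numbers;
# NOT the paper's AAM-Gym formulation) solved with value iteration.

STATES = ["nominal", "battery_low", "landed_safe", "crashed"]
ACTIONS = ["continue", "divert_to_vertiport"]
TERMINAL = {"landed_safe", "crashed"}

# P[state][action] -> list of (probability, next_state, reward)
P = {
    "nominal": {
        "continue": [(0.95, "nominal", 1.0), (0.05, "battery_low", 0.0)],
        "divert_to_vertiport": [(1.0, "landed_safe", 0.0)],
    },
    "battery_low": {
        "continue": [(0.5, "landed_safe", 0.0), (0.5, "crashed", -100.0)],
        "divert_to_vertiport": [(0.95, "landed_safe", 0.0),
                                (0.05, "crashed", -100.0)],
    },
}

def value_iteration(gamma=0.95, tol=1e-8):
    """Compute the optimal value function and greedy policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s in TERMINAL:
                continue  # terminal states keep value 0
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in ACTIONS
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {
        s: max(ACTIONS,
               key=lambda a: sum(p * (r + gamma * V[s2])
                                 for p, s2, r in P[s][a]))
        for s in STATES if s not in TERMINAL
    }
    return V, policy

V, policy = value_iteration()
print(policy)  # battery_low should trigger the divert contingency
```

With these toy numbers, the optimal policy continues the mission while nominal but diverts as soon as the battery degrades, which is the qualitative behavior a contingency-management policy is meant to capture.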
Related papers
- Deep progressive reinforcement learning-based flexible resource scheduling framework for IRS and UAV-assisted MEC system [22.789916304113476]
Unmanned aerial vehicle (UAV)-assisted mobile edge computing systems are widely used in temporary and emergency scenarios.
Our goal is to minimize the energy consumption of the MEC system by jointly optimizing UAV locations, IRS phase shift, task offloading, and resource allocation with a variable number of UAVs.
arXiv Detail & Related papers (2024-08-02T13:10:33Z)
- Tradeoffs When Considering Deep Reinforcement Learning for Contingency Management in Advanced Air Mobility [0.0]
Air transportation is undergoing a rapid evolution globally with the introduction of Advanced Air Mobility (AAM).
Increased levels of automation are likely necessary to achieve operational safety and efficiency goals.
This paper explores the use of Deep Reinforcement Learning (DRL) which has shown promising performance in complex and high-dimensional environments.
arXiv Detail & Related papers (2024-06-28T19:09:55Z)
- Airport take-off and landing optimization through genetic algorithms [55.2480439325792]
This research addresses the crucial issue of pollution from aircraft operations, focusing on optimizing both gate allocation and runway scheduling simultaneously.
The study presents an innovative genetic algorithm-based method for minimizing pollution from fuel combustion during aircraft take-off and landing at airports.
arXiv Detail & Related papers (2024-02-29T14:53:55Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Toward collision-free trajectory for autonomous and pilot-controlled unmanned aerial vehicles [1.018017727755629]
This study leverages electronic conspicuity (EC) information made available by PilotAware Ltd to develop an advanced collision management methodology.
The merits of the DACM methodology have been demonstrated through extensive simulations and real-world field tests in avoiding mid-air collisions.
arXiv Detail & Related papers (2023-09-18T18:24:31Z)
- Improving Autonomous Separation Assurance through Distributed Reinforcement Learning with Attention Networks [0.0]
We present a reinforcement learning framework to provide autonomous self-separation capabilities within AAM corridors.
The problem is formulated as a Markov Decision Process and solved by developing a novel extension to the sample-efficient, off-policy soft actor-critic (SAC) algorithm.
A comprehensive numerical study shows that the proposed framework can ensure safe and efficient separation of aircraft in high density, dynamic environments.
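For context, the standard soft actor-critic algorithm referenced above maximizes an entropy-regularized return (this is the well-known SAC objective, not the paper's novel extension):

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
\!\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```

The entropy bonus weighted by the temperature $\alpha$ encourages exploration and makes the off-policy updates sample-efficient, which is why SAC is a natural base algorithm to extend for dense, dynamic separation-assurance environments.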
arXiv Detail & Related papers (2023-08-09T13:44:35Z)
- Artificial Intelligence Empowered Multiple Access for Ultra Reliable and Low Latency THz Wireless Networks [76.89730672544216]
Terahertz (THz) wireless networks are expected to catalyze the beyond fifth generation (B5G) era.
To satisfy the ultra-reliability and low-latency demands of several B5G applications, novel mobility management approaches are required.
This article presents a holistic MAC layer approach that enables intelligent user association and resource allocation, as well as flexible and adaptive mobility management.
arXiv Detail & Related papers (2022-08-17T03:00:24Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
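The market-based idea behind such economic formulations can be illustrated with a toy sequential auction, where each task is awarded to the UAV bidding the lowest travel cost. The positions and costs below are hypothetical, and this greedy scheme is only an illustration of the concept, not the REPlanner algorithm.

```python
# Toy sequential auction for UAV task allocation: each task goes to the
# available UAV bidding the lowest cost (here, straight-line distance).
# Illustrates the market-based idea only; NOT the REPlanner algorithm.
from math import dist

def auction_assign(uav_positions, task_positions):
    """Greedily award each task to the cheapest available UAV."""
    assignment = {}
    available = set(uav_positions)
    for task, t_pos in task_positions.items():
        if not available:
            break  # more tasks than UAVs; remaining tasks stay unassigned
        # Each available UAV "bids" its travel cost; the lowest bid wins.
        winner = min(available, key=lambda u: dist(uav_positions[u], t_pos))
        assignment[task] = winner
        available.remove(winner)
    return assignment

uavs = {"uav1": (0.0, 0.0), "uav2": (10.0, 0.0)}
tasks = {"survey_A": (1.0, 1.0), "survey_B": (9.0, 1.0)}
print(auction_assign(uavs, tasks))
```

Because the assignment is recomputed from whichever UAVs are currently available, adding or removing agents changes only the bidder pool, which hints at why market-based schemes tolerate changes in swarm size.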
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air Traffic Control [5.550794444001022]
We propose a new intelligent decision making framework that leverages multi-agent reinforcement learning (MARL) to suggest adjustments of aircraft speeds in real-time.
The goal of the system is to enhance the ability of an air traffic controller to provide effective guidance to aircraft to avoid air traffic congestion, near-miss situations, and to improve arrival timeliness.
arXiv Detail & Related papers (2020-04-03T06:03:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.