Towards a Standardized Reinforcement Learning Framework for AAM
Contingency Management
- URL: http://arxiv.org/abs/2311.10805v1
- Date: Fri, 17 Nov 2023 13:54:02 GMT
- Title: Towards a Standardized Reinforcement Learning Framework for AAM
Contingency Management
- Authors: Luis E. Alvarez, Marc W. Brittain, Kara Breeden
- Abstract summary: We formulate the contingency management problem as a Markov Decision Process (MDP) and integrate it into the AAM-Gym simulation framework.
This enables rapid prototyping of reinforcement learning algorithms and evaluation of existing systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advanced Air Mobility (AAM) is the next generation of air transportation that
includes new entrants such as electric vertical takeoff and landing (eVTOL)
aircraft, increasingly autonomous flight operations, and small UAS package
delivery. With these new vehicles and operational concepts comes a desire to
increase densities far beyond what occurs today in and around urban areas, to
utilize new battery technology, and to move toward more autonomously-piloted
aircraft. To achieve these goals, it becomes essential to introduce new safety
management system capabilities that can rapidly assess risk as it evolves
across a span of complex hazards and, if necessary, mitigate risk by executing
appropriate contingencies via supervised or automated decision-making during
flights. Recently, reinforcement learning has shown promise for real-time
decision making across a wide variety of applications including contingency
management. In this work, we formulate the contingency management problem as a
Markov Decision Process (MDP) and integrate the contingency management MDP into
the AAM-Gym simulation framework. This enables rapid prototyping of
reinforcement learning algorithms and evaluation of existing systems, thus
providing a community benchmark for future algorithm development. We report
baseline statistical information for the environment and provide example
performance metrics.
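To make the MDP formulation concrete, here is a minimal sketch of what a contingency-management environment could look like behind a Gymnasium-style interface. The class name (ContingencyManagementEnv), state variables (battery state of charge, distance to an alternate landing site, degraded-system flag), dynamics, and reward values are illustrative assumptions for this summary, not the AAM-Gym API or the environment reported in the paper.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ContingencyManagementEnv(gym.Env):
    """Illustrative contingency-management MDP (not the AAM-Gym implementation).

    State: battery state of charge, distance to an alternate landing site (km),
    and a degraded-system flag. Actions: continue the mission, reroute to the
    alternate site, or execute an immediate precautionary landing.
    """

    def __init__(self):
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([1.0, 50.0, 1.0], dtype=np.float32),
        )
        # 0 = continue mission, 1 = reroute to alternate, 2 = land immediately
        self.action_space = spaces.Discrete(3)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.array([1.0, 20.0, 0.0], dtype=np.float32)
        return self.state.copy(), {}

    def step(self, action):
        soc, dist, degraded = self.state
        # Hypothetical dynamics: the battery drains every step, rerouting
        # closes the distance to the alternate site, and a failure may occur.
        soc -= 0.03
        if action == 1:
            dist = max(dist - 2.0, 0.0)
        if self.np_random.random() < 0.05:
            degraded = 1.0
        self.state = np.array([soc, dist, degraded], dtype=np.float32)

        terminated = bool(soc <= 0.0 or action == 2 or (action == 1 and dist == 0.0))
        if soc <= 0.0:
            reward = -100.0          # battery depleted while airborne
        elif action == 2:
            reward = 5.0 if degraded else -10.0  # justified vs. needless landing
        elif action == 1 and dist == 0.0:
            reward = 10.0            # reached the alternate landing site
        else:
            reward = 1.0             # nominal progress on the mission
        return self.state.copy(), float(reward), terminated, False, {}
```

An environment of this shape can be handed to any off-the-shelf reinforcement learning library (e.g., a DQN or PPO implementation) to collect the kind of baseline statistics and example performance metrics the paper reports, such as episode return or the rate of forced landings.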
Related papers
- Aerial Reliable Collaborative Communications for Terrestrial Mobile Users via Evolutionary Multi-Objective Deep Reinforcement Learning [59.660724802286865]
Unmanned aerial vehicles (UAVs) have emerged as potential aerial base stations (BSs) to improve terrestrial communications.
This work employs collaborative beamforming through a UAV-enabled virtual antenna array to improve transmission performance from the UAV to terrestrial mobile users.
arXiv Detail & Related papers (2025-02-09T09:15:47Z)
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming six-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Tradeoffs When Considering Deep Reinforcement Learning for Contingency Management in Advanced Air Mobility [0.0]
Air transportation is undergoing a rapid evolution globally with the introduction of Advanced Air Mobility (AAM).
Increased levels of automation are likely necessary to achieve operational safety and efficiency goals.
This paper explores the use of Deep Reinforcement Learning (DRL), which has shown promising performance in complex and high-dimensional environments.
arXiv Detail & Related papers (2024-06-28T19:09:55Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Toward collision-free trajectory for autonomous and pilot-controlled unmanned aerial vehicles [1.018017727755629]
This study makes greater use of electronic conspicuity (EC) information made available by PilotAware Ltd in developing an advanced collision management methodology.
The merits of the DACM methodology have been demonstrated through extensive simulations and real-world field tests in avoiding mid-air collisions.
arXiv Detail & Related papers (2023-09-18T18:24:31Z)
- Improving Autonomous Separation Assurance through Distributed Reinforcement Learning with Attention Networks [0.0]
We present a reinforcement learning framework to provide autonomous self-separation capabilities within AAM corridors.
The problem is formulated as a Markov Decision Process and solved by developing a novel extension to the sample-efficient, off-policy soft actor-critic (SAC) algorithm (a generic sketch of a SAC-style update appears after this list).
A comprehensive numerical study shows that the proposed framework can ensure safe and efficient separation of aircraft in high density, dynamic environments.
arXiv Detail & Related papers (2023-08-09T13:44:35Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- A Deep Ensemble Multi-Agent Reinforcement Learning Approach for Air Traffic Control [5.550794444001022]
We propose a new intelligent decision making framework that leverages multi-agent reinforcement learning (MARL) to suggest adjustments of aircraft speeds in real-time.
The goal of the system is to enhance the ability of an air traffic controller to provide effective guidance to aircraft, avoiding air traffic congestion and near-miss situations while improving arrival timeliness.
arXiv Detail & Related papers (2020-04-03T06:03:53Z)
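For reference on the separation-assurance entry above, which extends the soft actor-critic (SAC) algorithm, the following is a minimal, generic sketch of a discrete-action SAC-style update in PyTorch. It does not reproduce that paper's distributed, attention-network extension; the fixed entropy coefficient, the function name, and the network assumptions here are illustrative only.

```python
import torch
import torch.nn.functional as F


def sac_update(q1, q2, q1_targ, q2_targ, policy, batch, gamma=0.99, alpha=0.2):
    """One discrete-action SAC-style gradient step on a replay batch.

    q1/q2 and their targets map observations to per-action Q-values; policy
    maps observations to action logits. Returns the critic and actor losses.
    """
    obs, act, rew, next_obs, done = batch

    # Soft Bellman target using the minimum of the two target critics.
    with torch.no_grad():
        next_logits = policy(next_obs)
        next_probs = F.softmax(next_logits, dim=-1)
        next_logp = F.log_softmax(next_logits, dim=-1)
        min_q_next = torch.min(q1_targ(next_obs), q2_targ(next_obs))
        v_next = (next_probs * (min_q_next - alpha * next_logp)).sum(dim=-1)
        target = rew + gamma * (1.0 - done) * v_next

    # Critic losses: regress Q(s, a) toward the soft target.
    q1_sa = q1(obs).gather(1, act.long().unsqueeze(1)).squeeze(1)
    q2_sa = q2(obs).gather(1, act.long().unsqueeze(1)).squeeze(1)
    critic_loss = F.mse_loss(q1_sa, target) + F.mse_loss(q2_sa, target)

    # Actor loss: maximize the expected soft value under the current critics.
    logits = policy(obs)
    probs = F.softmax(logits, dim=-1)
    logp = F.log_softmax(logits, dim=-1)
    min_q = torch.min(q1(obs), q2(obs)).detach()
    actor_loss = (probs * (alpha * logp - min_q)).sum(dim=-1).mean()

    return critic_loss, actor_loss
```

In practice, the two losses would be minimized with separate optimizers, the target networks refreshed with a Polyak average, and, as in many SAC variants, the entropy coefficient learned rather than fixed.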
This list is automatically generated from the titles and abstracts of the papers in this site.