Coordinated Strategies in Realistic Air Combat by Hierarchical Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2510.11474v2
- Date: Wed, 22 Oct 2025 08:38:26 GMT
- Title: Coordinated Strategies in Realistic Air Combat by Hierarchical Multi-Agent Reinforcement Learning
- Authors: Ardian Selmonaj, Giacomo Del Rio, Adrian Schneider, Alessandro Antonucci,
- Abstract summary: We introduce a novel 3D multi-agent air combat environment and a Hierarchical Multi-Agent Reinforcement Learning framework to tackle these challenges. Our approach combines heterogeneous agent dynamics, curriculum learning, league-play, and a newly adapted training algorithm. Empirical results show that our hierarchical approach improves both learning efficiency and combat performance in complex dogfight scenarios.
- Score: 39.38793354038274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving mission objectives in a realistic simulation of aerial combat is highly challenging due to imperfect situational awareness and nonlinear flight dynamics. In this work, we introduce a novel 3D multi-agent air combat environment and a Hierarchical Multi-Agent Reinforcement Learning framework to tackle these challenges. Our approach combines heterogeneous agent dynamics, curriculum learning, league-play, and a newly adapted training algorithm. To this end, the decision-making process is organized into two abstraction levels: low-level policies learn precise control maneuvers, while high-level policies issue tactical commands based on mission objectives. Empirical results show that our hierarchical approach improves both learning efficiency and combat performance in complex dogfight scenarios.
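The abstract's two-level decision structure can be illustrated with a minimal sketch: a high-level policy picks a tactical command, which selects the low-level policy that produces continuous control maneuvers. All class and method names here (`HighLevelPolicy`, `LowLevelPolicy`, `HierarchicalAgent`) and the placeholder decision rules are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

class HighLevelPolicy:
    """Issues a discrete tactical command based on the mission situation."""
    COMMANDS = ("engage", "evade", "support")  # hypothetical command set

    def act(self, observation: np.ndarray) -> str:
        # Placeholder decision rule; a trained policy would instead map
        # the observation to a distribution over commands.
        return self.COMMANDS[int(observation.sum()) % len(self.COMMANDS)]

class LowLevelPolicy:
    """Produces a continuous control maneuver for one tactical command."""
    def act(self, observation: np.ndarray) -> np.ndarray:
        # Placeholder: a bounded 3D control vector (e.g. roll, pitch,
        # throttle) derived from the first observation components.
        return np.tanh(observation[:3])

class HierarchicalAgent:
    """Composes the levels: the command selects which low-level policy runs."""
    def __init__(self) -> None:
        self.high = HighLevelPolicy()
        self.low = {c: LowLevelPolicy() for c in HighLevelPolicy.COMMANDS}

    def act(self, observation: np.ndarray) -> np.ndarray:
        command = self.high.act(observation)       # tactical decision
        return self.low[command].act(observation)  # precise maneuver

agent = HierarchicalAgent()
controls = agent.act(np.array([0.2, -0.5, 0.9, 0.1]))
```

In the paper's setup the two levels are trained separately (low-level control first, then tactical commands on top); this sketch only shows how the abstraction levels compose at decision time.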
Related papers
- Reinforcement Learning for Decision-Level Interception Prioritization in Drone Swarm Defense [51.736723807086385]
We present a case study demonstrating the practical advantages of reinforcement learning in addressing this challenge. We introduce a high-fidelity simulation environment that captures realistic operational constraints. The agent learns to coordinate multiple effectors for optimal interception prioritization. We evaluate the learned policy against a handcrafted rule-based baseline across hundreds of simulated attack scenarios.
arXiv Detail & Related papers (2025-08-01T13:55:39Z)
- Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning [38.15185397658309]
This work presents a Hierarchical Multi-Agent Reinforcement Learning framework for analyzing simulated air combat scenarios. The objective is to identify effective Courses of Action that lead to mission success within preset simulations.
arXiv Detail & Related papers (2025-05-13T22:13:48Z)
- Dynamic Obstacle Avoidance with Bounded Rationality Adversarial Reinforcement Learning [5.760394464143113]
We propose a novel way to endow navigation policies with robustness through a training process that models obstacles as adversarial agents. We call this method Hierarchical policies via Quantal response Adversarial Reinforcement Learning (Hi-QARL).
arXiv Detail & Related papers (2025-03-14T14:54:02Z)
- A Hierarchical Reinforcement Learning Framework for Multi-UAV Combat Using Leader-Follower Strategy [3.095786524987445]
Multi-UAV air combat is a complex task involving multiple autonomous UAVs. Previous approaches predominantly discretize the action space into predefined actions. We propose a hierarchical framework utilizing the Leader-Follower Multi-Agent Proximal Policy Optimization strategy.
arXiv Detail & Related papers (2025-01-22T02:41:36Z) - Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module that computes a best response given the inferred goals.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv Detail & Related papers (2024-06-12T08:48:06Z)
- Hierarchical Multi-Agent Reinforcement Learning for Air Combat Maneuvering [40.06500618820166]
We propose a hierarchical multi-agent reinforcement learning framework for air-to-air combat with multiple heterogeneous agents.
Low-level policies are trained for accurate unit combat control. The commander policy is trained on mission targets given pre-trained low-level policies.
arXiv Detail & Related papers (2023-09-20T12:16:00Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.