Integrated Communication and Control for Energy-Efficient UAV Swarms: A Multi-Agent Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2509.23905v1
- Date: Sun, 28 Sep 2025 14:23:04 GMT
- Title: Integrated Communication and Control for Energy-Efficient UAV Swarms: A Multi-Agent Reinforcement Learning Approach
- Authors: Tianjiao Sun, Ningyan Guo, Haozhe Gu, Yanyan Peng, Zhiyong Feng,
- Abstract summary: We propose an integrated communication and control co-design mechanism to improve the quality of UAV swarm-assisted communications. We formulate the joint resource allocation and 3D trajectory control problem as a Markov decision process (MDP), and develop a multi-agent reinforcement learning framework to enable real-time coordinated actions across the UAV swarm.
- Score: 9.51758427865825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The deployment of unmanned aerial vehicle (UAV) swarm-assisted communication networks has become an increasingly vital approach for remediating coverage limitations in infrastructure-deficient environments, with especially pressing applications in temporary scenarios, such as emergency rescue, military and security operations, and remote area coverage. However, complex geographic environments lead to unpredictable and highly dynamic wireless channel conditions, resulting in frequent interruptions of air-to-ground (A2G) links that severely constrain the reliability and quality of service in UAV swarm-assisted mobile communications. To improve the quality of UAV swarm-assisted communications in complex geographic environments, we propose an integrated communication and control co-design mechanism. Given the stringent energy constraints inherent in UAV swarms, our proposed mechanism is designed to optimize energy efficiency while maintaining an equilibrium between equitable communication rates for mobile ground users (GUs) and UAV energy expenditure. We formulate the joint resource allocation and 3D trajectory control problem as a Markov decision process (MDP), and develop a multi-agent reinforcement learning (MARL) framework to enable real-time coordinated actions across the UAV swarm. To optimize the action policy of UAV swarms, we propose a novel multi-agent hybrid proximal policy optimization with action masking (MAHPPO-AM) algorithm, specifically designed to handle complex hybrid action spaces. The algorithm incorporates action masking to enforce hard constraints in high-dimensional action spaces. Experimental results demonstrate that our approach achieves a fairness index of 0.99 while reducing energy consumption by up to 25% compared to baseline methods.
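The abstract's core mechanism, action masking inside a PPO-style policy over a hybrid (discrete plus continuous) action space, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' MAHPPO-AM implementation; the specific mask semantics, the single-UAV view, and all function names here are assumptions for the example.

```python
import numpy as np

def masked_softmax(logits, mask):
    """Softmax over logits where mask[i] == False forces probability 0.

    Invalid actions receive -inf logits, so they can never be sampled;
    this is the basic way action masking enforces hard constraints
    inside a policy's discrete action head.
    """
    masked_logits = np.where(mask, logits, -np.inf)
    z = masked_logits - masked_logits.max()  # stabilize the exponent
    exp = np.exp(z)
    return exp / exp.sum()

def sample_hybrid_action(rng, discrete_logits, valid_mask, cont_mean, cont_std):
    """Sample one hybrid action: a masked discrete choice (e.g. which
    ground user to serve) plus a continuous Gaussian vector (e.g. a 3D
    velocity command for trajectory control)."""
    probs = masked_softmax(discrete_logits, valid_mask)
    discrete = rng.choice(len(probs), p=probs)
    continuous = rng.normal(cont_mean, cont_std)
    return discrete, continuous, probs

# Toy example: 4 discrete options; option 2 would violate a hard constraint.
rng = np.random.default_rng(0)
logits = np.array([0.5, 1.0, 3.0, -0.2])
mask = np.array([True, True, False, True])   # option 2 is masked out
d, c, p = sample_hybrid_action(rng, logits, mask, np.zeros(3), np.ones(3))
assert p[2] == 0.0   # masked action has exactly zero probability
assert d != 2        # and is therefore never sampled
```

Because masked actions carry zero probability, they also contribute nothing to the policy-gradient term, which is why masking (rather than reward penalties) is the usual way to impose hard constraints in high-dimensional action spaces.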
Related papers
- Hierarchical Task Offloading and Trajectory Optimization in Low-Altitude Intelligent Networks Via Auction and Diffusion-based MARL [37.79695337425523]
Low-altitude intelligent networks (LAINs) can support mission-critical applications such as disaster response, environmental monitoring, and real-time sensing. These systems face key challenges, including energy-constrained UAVs, task arrivals, and heterogeneous computing resources. We propose an integrated air-ground collaborative network and formulate a time-dependent integer nonlinear programming problem that jointly optimizes UAV trajectory planning and task offloading decisions.
arXiv Detail & Related papers (2025-12-05T08:14:45Z) - Secure Low-altitude Maritime Communications via Intelligent Jamming [53.42658269206017]
Low-altitude wireless networks (LAWNs) have emerged as a viable solution for maritime communications. The open nature of UAV communication channels makes maritime LAWNs vulnerable to eavesdropping attacks. We propose a low-altitude maritime communication system that employs intelligent jamming to counter dynamic eavesdroppers.
arXiv Detail & Related papers (2025-11-10T03:16:19Z) - When UAV Swarm Meets IRS: Collaborative Secure Communications in Low-altitude Wireless Networks [68.45202147860537]
Low-altitude wireless networks (LAWNs) provide enhanced coverage, reliability, and throughput for diverse applications. These networks face significant security vulnerabilities from both known and potential unknown eavesdroppers. We propose a novel secure communication framework for LAWNs where the selected UAVs within a swarm function as a virtual antenna array.
arXiv Detail & Related papers (2025-10-25T02:02:14Z) - LLM Meets the Sky: Heuristic Multi-Agent Reinforcement Learning for Secure Heterogeneous UAV Networks [57.27815890269697]
This work focuses on maximizing the secrecy rate in heterogeneous UAV networks (HetUAVNs) under energy constraints. We introduce a Large Language Model (LLM)-guided multi-agent learning approach. Results show that our method outperforms existing baselines in secrecy and energy efficiency.
arXiv Detail & Related papers (2025-07-23T04:22:57Z) - Age of Information Minimization in UAV-Enabled Integrated Sensing and Communication Systems [34.92822911897626]
Unmanned aerial vehicles (UAVs) equipped with integrated sensing and communication (ISAC) capabilities are envisioned to play a pivotal role in future wireless networks. We propose an Age of Information (AoI)-aware system that simultaneously performs target sensing and multi-user communication.
arXiv Detail & Related papers (2025-07-18T18:17:09Z) - From Static to Adaptive Defense: Federated Multi-Agent Deep Reinforcement Learning-Driven Moving Target Defense Against DoS Attacks in UAV Swarm Networks [23.908450903174725]
We propose a novel framework for proactive DoS mitigation in UAV swarms. We design lightweight and coordinated MTD mechanisms, including leader switching, route mutation, and frequency hopping. Our approach significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2025-06-09T03:33:04Z) - Aerial Reliable Collaborative Communications for Terrestrial Mobile Users via Evolutionary Multi-Objective Deep Reinforcement Learning [59.660724802286865]
Unmanned aerial vehicles (UAVs) have emerged as potential aerial base stations (BSs) to improve terrestrial communications. This work employs collaborative beamforming through a UAV-enabled virtual antenna array to improve transmission performance from the UAV to terrestrial mobile users.
arXiv Detail & Related papers (2025-02-09T09:15:47Z) - Low-altitude Friendly-Jamming for Satellite-Maritime Communications via Generative AI-enabled Deep Reinforcement Learning [72.72954660774002]
Low Earth Orbit (LEO) satellites can be used to assist maritime wireless communications for data transmission across wide-ranging areas. The extensive coverage of LEO satellites, combined with the openness of wireless channels, can expose the communication process to security risks. This paper presents a low-altitude friendly-jamming LEO satellite-maritime communication system enabled by an unmanned aerial vehicle.
arXiv Detail & Related papers (2025-01-26T10:13:51Z) - Reinforcement Learning for Enhancing Sensing Estimation in Bistatic ISAC Systems with UAV Swarms [4.387337528923525]
This paper introduces a novel Multi-Agent Reinforcement Learning (MARL) framework to enhance integrated sensing and communication networks. By framing the positioning and trajectory optimization of UAVs as a Partially Observable Markov Decision Process, we develop a MARL approach. We implement a decentralized cooperative MARL strategy to enable UAVs to develop effective communication protocols.
arXiv Detail & Related papers (2025-01-11T06:57:52Z) - Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring. In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas. The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z) - UAV-enabled Collaborative Beamforming via Multi-Agent Deep Reinforcement Learning [79.16150966434299]
We formulate a UAV-enabled collaborative beamforming multi-objective optimization problem (UCBMOP) to maximize the transmission rate of the UVAA and minimize the energy consumption of all UAVs.
We use the heterogeneous-agent trust region policy optimization (HATRPO) as the basic framework, and then propose an improved HATRPO algorithm, namely HATRPO-UCB.
arXiv Detail & Related papers (2024-04-11T03:19:22Z) - Meta Reinforcement Learning for Strategic IoT Deployments Coverage in Disaster-Response UAV Swarms [5.57865728456594]
Unmanned Aerial Vehicles (UAVs) have grabbed the attention of researchers in academia and industry for their potential use in critical emergency applications.
These applications include providing wireless services to ground users and collecting data from areas affected by disasters.
UAVs' limited resources, energy budget, and strict mission completion time have posed challenges in adopting UAVs for these applications.
arXiv Detail & Related papers (2024-01-20T05:05:39Z) - UAV Swarm-enabled Collaborative Secure Relay Communications with Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize the UAV to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.