From Static to Adaptive Defense: Federated Multi-Agent Deep Reinforcement Learning-Driven Moving Target Defense Against DoS Attacks in UAV Swarm Networks
- URL: http://arxiv.org/abs/2506.07392v2
- Date: Wed, 10 Sep 2025 03:47:56 GMT
- Title: From Static to Adaptive Defense: Federated Multi-Agent Deep Reinforcement Learning-Driven Moving Target Defense Against DoS Attacks in UAV Swarm Networks
- Authors: Yuyang Zhou, Guang Cheng, Kang Du, Zihan Chen, Tian Qin, Yuyu Zhao,
- Abstract summary: We propose a novel framework for proactive DoS mitigation in UAV swarms. We design lightweight and coordinated MTD mechanisms, including leader switching, route mutation, and frequency hopping. Our approach significantly outperforms state-of-the-art baselines.
- Score: 23.908450903174725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of UAVs has enabled a wide range of mission-critical applications and is becoming a cornerstone of low-altitude networks, supporting smart cities, emergency response, and more. However, the open wireless environment, dynamic topology, and resource constraints of UAVs expose low-altitude networks to severe DoS threats. Traditional defense approaches, which rely on fixed configurations or centralized decision-making, cannot effectively respond to the rapidly changing conditions in UAV swarm environments. To address these challenges, we propose a novel federated multi-agent deep reinforcement learning (FMADRL)-driven moving target defense (MTD) framework for proactive DoS mitigation in low-altitude networks. Specifically, we design lightweight and coordinated MTD mechanisms, including leader switching, route mutation, and frequency hopping, to disrupt attacker efforts and enhance network resilience. The defense problem is formulated as a multi-agent partially observable Markov decision process, capturing the uncertain nature of UAV swarms under attack. Each UAV is equipped with a policy agent that autonomously selects MTD actions based on partial observations and local experiences. By employing a policy gradient-based algorithm, UAVs collaboratively optimize their policies via reward-weighted aggregation. Extensive simulations demonstrate that our approach significantly outperforms state-of-the-art baselines, achieving up to a 34.6% improvement in attack mitigation rate, a reduction in average recovery time of up to 94.6%, and decreases in energy consumption and defense cost by as much as 29.3% and 98.3%, respectively, under various DoS attack strategies. These results highlight the potential of intelligent, distributed defense mechanisms to protect low-altitude networks, paving the way for a reliable and scalable low-altitude economy.
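The abstract's reward-weighted aggregation step can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, parameter shapes, and the min-shift weight normalization are all assumptions.

```python
import numpy as np

def reward_weighted_aggregate(local_params, local_rewards):
    """Aggregate per-UAV policy parameters into a global policy,
    weighting each agent's contribution by its (shifted, normalized)
    episode reward. Hypothetical sketch of the FMADRL aggregation idea."""
    rewards = np.asarray(local_rewards, dtype=float)
    # Shift so the worst-performing agent still gets a tiny positive weight,
    # then normalize the weights to sum to 1.
    weights = rewards - rewards.min() + 1e-6
    weights /= weights.sum()
    stacked = np.stack(local_params)  # shape: (n_agents, n_params)
    # Weighted sum over the agent axis.
    return np.einsum("a,ap->p", weights, stacked)

# Toy example: three UAV agents with 4-dimensional policy parameter vectors.
params = [np.array([1.0, 0.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0, 0.0]),
          np.array([0.0, 0.0, 1.0, 0.0])]
rewards = [10.0, 20.0, 30.0]
global_params = reward_weighted_aggregate(params, rewards)
```

Higher-reward agents dominate the aggregate, so UAVs whose MTD policies mitigated attacks more effectively pull the shared policy toward their parameters.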
Related papers
- An Efficient Privacy-preserving Intrusion Detection Scheme for UAV Swarm Networks [0.0]
An Intrusion Detection System plays a vital role in identifying potential security attacks to ensure the secure operation of UAV swarm networks. This research aims to address these challenges by developing a novel lightweight and federated continuous learning-based IDS scheme.
arXiv Detail & Related papers (2025-11-27T22:37:06Z) - Secure Low-altitude Maritime Communications via Intelligent Jamming [53.42658269206017]
Low-altitude wireless networks (LAWNs) have emerged as a viable solution for maritime communications. The open and clear UAV communication channels make maritime LAWNs vulnerable to eavesdropping attacks. We propose a low-altitude maritime communication system that employs intelligent jamming to counter dynamic eavesdroppers.
arXiv Detail & Related papers (2025-11-10T03:16:19Z) - When UAV Swarm Meets IRS: Collaborative Secure Communications in Low-altitude Wireless Networks [68.45202147860537]
Low-altitude wireless networks (LAWNs) provide enhanced coverage, reliability, and throughput for diverse applications. These networks face significant security vulnerabilities from both known and potential unknown eavesdroppers. We propose a novel secure communication framework for LAWNs where the selected UAVs within a swarm function as a virtual antenna array.
arXiv Detail & Related papers (2025-10-25T02:02:14Z) - Integrated Communication and Control for Energy-Efficient UAV Swarms: A Multi-Agent Reinforcement Learning Approach [9.51758427865825]
We propose an integrated communication and control co-design mechanism to improve the quality of UAV swarm-assisted communications. We formulate the joint resource allocation and 3D trajectory control problem as a Markov decision process (MDP). We develop a multi-agent reinforcement learning framework to enable real-time coordinated actions across the UAV swarm.
arXiv Detail & Related papers (2025-09-28T14:23:04Z) - DOPA: Stealthy and Generalizable Backdoor Attacks from a Single Client under Challenging Federated Constraints [2.139012072214621]
Federated Learning (FL) is increasingly adopted for privacy-preserving collaborative training, but its decentralized nature makes it susceptible to backdoor attacks. Existing attack methods, however, often rely on idealized assumptions and fail to remain effective under real-world constraints. We propose DOPA, a novel framework that simulates heterogeneous local training dynamics and seeks consensus across divergent optimization trajectories to craft universally effective and stealthy backdoor triggers.
arXiv Detail & Related papers (2025-08-20T08:39:12Z) - Reinforcement Learning for Decision-Level Interception Prioritization in Drone Swarm Defense [51.736723807086385]
We present a case study demonstrating the practical advantages of reinforcement learning in addressing this challenge. We introduce a high-fidelity simulation environment that captures realistic operational constraints. The agent learns to coordinate multiple effectors for optimal interception prioritization. We evaluate the learned policy against a handcrafted rule-based baseline across hundreds of simulated attack scenarios.
arXiv Detail & Related papers (2025-08-01T13:55:39Z) - LLM Meets the Sky: Heuristic Multi-Agent Reinforcement Learning for Secure Heterogeneous UAV Networks [57.27815890269697]
This work focuses on maximizing the secrecy rate in heterogeneous UAV networks (HetUAVNs) under energy constraints. We introduce a Large Language Model (LLM)-guided multi-agent learning approach. Results show that our method outperforms existing baselines in secrecy and energy efficiency.
arXiv Detail & Related papers (2025-07-23T04:22:57Z) - Defending Deep Neural Networks against Backdoor Attacks via Module Switching [15.979018992591032]
An exponential increase in the parameters of Deep Neural Networks (DNNs) has significantly raised the cost of independent training. Open-source models are more vulnerable to malicious threats, such as backdoor attacks. We propose a novel module-switching strategy to break such spurious correlations within the model's propagation path.
arXiv Detail & Related papers (2025-04-08T11:01:07Z) - Aerial Reliable Collaborative Communications for Terrestrial Mobile Users via Evolutionary Multi-Objective Deep Reinforcement Learning [59.660724802286865]
Unmanned aerial vehicles (UAVs) have emerged as the potential aerial base stations (BSs) to improve terrestrial communications. This work employs collaborative beamforming through a UAV-enabled virtual antenna array to improve transmission performance from the UAV to terrestrial mobile users.
arXiv Detail & Related papers (2025-02-09T09:15:47Z) - Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring. In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas. The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z) - Intercepting Unauthorized Aerial Robots in Controlled Airspace Using Reinforcement Learning [2.519319150166215]
The proliferation of unmanned aerial vehicles (UAVs) in controlled airspace presents significant risks.
This work addresses the need for robust, adaptive systems capable of managing such threats through the use of Reinforcement Learning (RL).
We present a novel approach utilizing RL to train fixed-wing UAV pursuer agents for intercepting dynamic evader targets.
arXiv Detail & Related papers (2024-07-09T14:45:47Z) - Anti-Jamming Path Planning Using GCN for Multi-UAV [0.0]
The effectiveness of UAV swarms can be severely compromised by jamming technology.
A novel approach, where UAV swarms leverage collective intelligence to predict jamming areas, is proposed.
A multi-agent control algorithm is then employed to disperse the UAV swarm, avoid jamming, and regroup upon reaching the target.
arXiv Detail & Related papers (2024-03-13T07:28:05Z) - Meta Reinforcement Learning for Strategic IoT Deployments Coverage in Disaster-Response UAV Swarms [5.57865728456594]
Unmanned Aerial Vehicles (UAVs) have grabbed the attention of researchers in academia and industry for their potential use in critical emergency applications.
These applications include providing wireless services to ground users and collecting data from areas affected by disasters.
UAVs' limited resources, energy budget, and strict mission completion time have posed challenges in adopting UAVs for these applications.
arXiv Detail & Related papers (2024-01-20T05:05:39Z) - UAV Swarm-enabled Collaborative Secure Relay Communications with Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize the UAV to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Distributed Reinforcement Learning for Flexible and Efficient UAV Swarm Control [28.463670610865837]
We propose a distributed Reinforcement Learning (RL) approach that scales to larger swarms without modifications.
Our experiments show that the proposed method can yield effective strategies, which are robust to communication channel impairments.
We also show that our approach achieves better performance compared to a computationally intensive look-ahead.
arXiv Detail & Related papers (2021-03-08T11:06:28Z) - Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)