Trusted Routing for Blockchain-Empowered UAV Networks via Multi-Agent Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2508.00938v1
- Date: Thu, 31 Jul 2025 13:00:10 GMT
- Title: Trusted Routing for Blockchain-Empowered UAV Networks via Multi-Agent Deep Reinforcement Learning
- Authors: Ziye Jia, Sijie He, Qiuming Zhu, Wei Wang, Qihui Wu, Zhu Han
- Abstract summary: In UAV networks, routing is vulnerable to malicious damage due to distributed topologies and high dynamics. We formulate the routing problem to minimize the total delay, which is an integer linear program and intractable to solve. To tackle the network security issue, a blockchain-based trust management mechanism (BTMM) is designed to dynamically evaluate trust values and identify low-trust UAVs.
- Score: 29.81764349753088
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Due to their high flexibility and versatility, unmanned aerial vehicles (UAVs) are leveraged in various fields, including surveillance and disaster rescue. However, in UAV networks, routing is vulnerable to malicious damage due to distributed topologies and high dynamics. Hence, ensuring the routing security of UAV networks is challenging. In this paper, we characterize the routing process in a time-varying UAV network with malicious nodes. Specifically, we formulate the routing problem to minimize the total delay, which is an integer linear program and intractable to solve directly. Then, to tackle the network security issue, a blockchain-based trust management mechanism (BTMM) is designed to dynamically evaluate trust values and identify low-trust UAVs. To improve the traditional practical Byzantine fault tolerance algorithm in the blockchain, we propose a consensus UAV update mechanism. Besides, considering local observability, the routing problem is reformulated into a decentralized partially observable Markov decision process. Further, a multi-agent double deep Q-network based routing algorithm is designed to minimize the total delay. Finally, simulations are conducted with attacked UAVs, and numerical results show that the delay of the proposed mechanism decreases by 13.39$\%$, 12.74$\%$, and 16.6$\%$ compared with the multi-agent proximal policy optimization algorithm, the multi-agent deep Q-network algorithm, and the method without BTMM, respectively.
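The routing algorithm described in the abstract rests on the double deep Q-network target, where the online network selects the next-hop action and the target network evaluates it. The snippet below is a minimal sketch of that target computation, not the authors' implementation: the Q-networks are replaced by tabular stand-ins, the reward is modeled as negative per-hop delay (so maximizing return minimizes total delay), and all names and values are illustrative assumptions.

```python
import numpy as np

def ddqn_target(reward, next_obs, q_online, q_target, gamma=0.9, done=False):
    """Double DQN target: the online net selects the action,
    the target net evaluates it, decoupling selection from evaluation."""
    if done:
        return reward
    best_action = int(np.argmax(q_online[next_obs]))        # action selection
    return reward + gamma * q_target[next_obs][best_action]  # action evaluation

# Toy example: 2 local observations, 3 candidate next-hop UAVs.
q_online = {0: np.array([0.1, 0.5, 0.2]), 1: np.array([0.3, 0.0, 0.4])}
q_target = {0: np.array([0.2, 0.4, 0.1]), 1: np.array([0.1, 0.2, 0.6])}

# Reward of -1.0 stands in for one unit of per-hop delay.
y = ddqn_target(reward=-1.0, next_obs=1, q_online=q_online, q_target=q_target)
print(round(y, 2))  # -0.46: online net picks action 2, target net scores it 0.6
```

In a multi-agent setting, each UAV would run this update on its own local observation, which is what makes the decentralized partially observable formulation tractable.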
Related papers
- LLM Meets the Sky: Heuristic Multi-Agent Reinforcement Learning for Secure Heterogeneous UAV Networks [57.27815890269697]
This work focuses on maximizing the secrecy rate in heterogeneous UAV networks (HetUAVNs) under energy constraints. We introduce a Large Language Model (LLM)-guided multi-agent learning approach. Results show that our method outperforms existing baselines in secrecy and energy efficiency.
arXiv Detail & Related papers (2025-07-23T04:22:57Z) - CNN+Transformer Based Anomaly Traffic Detection in UAV Networks for Emergency Rescue [12.074051347588963]
We propose a novel anomaly traffic detection architecture for UAV networks based on the software-defined networking (SDN) framework and blockchain technology. An integrated algorithm combining convolutional neural networks (CNNs) and a Transformer (CNN+Transformer) for anomaly traffic detection is developed, which is called CTranATD.
arXiv Detail & Related papers (2025-03-26T09:27:26Z) - Cluster-Based Multi-Agent Task Scheduling for Space-Air-Ground Integrated Networks [60.085771314013044]
The low-altitude economy holds significant potential for development in areas such as communication and sensing. We propose a Clustering-based Multi-agent Deep Deterministic Policy Gradient (CMADDPG) algorithm to address the multi-UAV cooperative task scheduling challenges in SAGIN.
arXiv Detail & Related papers (2024-12-14T06:17:33Z) - Federated Learning in UAV-Enhanced Networks: Joint Coverage and Convergence Time Optimization [16.265792031520945]
Federated learning (FL) involves several devices that collaboratively train a shared model without transferring their local data.
FL reduces the communication overhead, making it a promising learning method in UAV-enhanced wireless networks with scarce energy resources.
Despite the potential, implementing FL in UAV-enhanced networks is challenging, as conventional UAV placement methods that maximize coverage increase the FL delay.
arXiv Detail & Related papers (2023-08-31T17:50:54Z) - Routing Recovery for UAV Networks with Deliberate Attacks: A Reinforcement Learning based Approach [23.317947964385613]
This paper focuses on the routing plan and recovery for UAV networks with attacks.
A deliberate attack model based on the importance of nodes is designed to represent enemy attacks.
An intelligent algorithm based on reinforcement learning is proposed to recover the routing path when UAVs are attacked.
arXiv Detail & Related papers (2023-08-14T07:11:55Z) - Fidelity-Guarantee Entanglement Routing in Quantum Networks [64.49733801962198]
Entanglement routing establishes remote entanglement connection between two arbitrary nodes.
We propose purification-enabled entanglement routing designs to provide fidelity guarantee for multiple Source-Destination (SD) pairs in quantum networks.
arXiv Detail & Related papers (2021-11-15T14:07:22Z) - Jamming-Resilient Path Planning for Multiple UAVs via Deep Reinforcement Learning [1.2330326247154968]
Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks.
In this paper, we aim to find collision-free paths for multiple cellular-connected UAVs.
We propose an offline temporal difference (TD) learning algorithm with online signal-to-interference-plus-noise ratio mapping to solve the problem.
arXiv Detail & Related papers (2021-04-09T16:52:33Z) - Learning-Based UAV Trajectory Optimization with Collision Avoidance and Connectivity Constraints [0.0]
Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks.
In this paper, we reformulate the multi-UAV trajectory optimization problem with collision avoidance and wireless connectivity constraints.
We propose a decentralized deep reinforcement learning approach to solve the problem.
arXiv Detail & Related papers (2021-04-03T22:22:20Z) - Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z) - Multi-Agent Reinforcement Learning in NOMA-aided UAV Networks for Cellular Offloading [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T20:22:05Z) - NOMA in UAV-aided cellular offloading: A machine learning approach [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T17:38:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.