Federated Learning-based MARL for Strengthening Physical-Layer Security in B5G Networks
- URL: http://arxiv.org/abs/2507.06997v1
- Date: Wed, 09 Jul 2025 16:24:15 GMT
- Title: Federated Learning-based MARL for Strengthening Physical-Layer Security in B5G Networks
- Authors: Deemah H. Tashman, Soumaya Cherkaoui, Walaa Hamouda
- Abstract summary: This paper explores the application of a federated learning-based multi-agent reinforcement learning (MARL) strategy to enhance physical-layer security (PLS) in a multi-cellular network within the context of beyond 5G networks. At each cell, a base station (BS) operates as a deep reinforcement learning (DRL) agent that interacts with the surrounding environment to maximize the secrecy rate of legitimate users in the presence of an eavesdropper.
- Score: 16.57893415196489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the application of a federated learning-based multi-agent reinforcement learning (MARL) strategy to enhance physical-layer security (PLS) in a multi-cellular network within the context of beyond 5G networks. At each cell, a base station (BS) operates as a deep reinforcement learning (DRL) agent that interacts with the surrounding environment to maximize the secrecy rate of legitimate users in the presence of an eavesdropper. This eavesdropper attempts to intercept the confidential information shared between the BS and its authorized users. The DRL agents are deemed to be federated since they only share their network parameters with a central server and not the private data of their legitimate users. Two DRL approaches, deep Q-network (DQN) and Reinforce deep policy gradient (RDPG), are explored and compared. The results demonstrate that RDPG converges more rapidly than DQN. In addition, we demonstrate that the proposed method outperforms the distributed DRL approach. Furthermore, the outcomes illustrate the trade-off between security and complexity.
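The federated setup the abstract describes — each BS agent uploads only its network parameters to a central server, never its users' private data — is in the spirit of FedAvg-style aggregation. The following is a minimal illustrative sketch of that server-side averaging step, not the paper's actual implementation; the function name, uniform weighting, and flattened parameter vectors are assumptions for clarity:

```python
import numpy as np

def federated_average(agent_params, weights=None):
    """FedAvg-style aggregation sketch: the central server averages the
    parameter vectors uploaded by the per-cell BS agents. Only model
    parameters are exchanged; user data stays local to each cell."""
    n = len(agent_params)
    if weights is None:
        weights = [1.0 / n] * n  # uniform weighting by default
    return sum(w * p for w, p in zip(weights, agent_params))

# Three hypothetical BS agents, each with a flattened parameter vector.
local = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(federated_average(local))  # -> [3. 4.]
```

In practice the averaged global parameters would be broadcast back to the agents for the next round of local DRL training (DQN or RDPG in the paper's comparison).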
Related papers
- Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association [93.46811590752814]
Hierarchical secure aggregation (HSA) is motivated by federated learning (FL). In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays. We propose an efficient aggregation scheme that includes a message design for the inputs inspired by gradient coding.
arXiv Detail & Related papers (2025-03-06T15:53:37Z) - Achieving Network Resilience through Graph Neural Network-enabled Deep Reinforcement Learning [64.20847540439318]
Deep reinforcement learning (DRL) has been widely used in many important tasks of communication networks. Some studies have combined graph neural networks (GNNs) with DRL, using the GNNs to extract unstructured features of the network. This paper explores combining GNNs with DRL to build a resilient network.
arXiv Detail & Related papers (2025-01-19T15:22:17Z) - Label-free Deep Learning Driven Secure Access Selection in Space-Air-Ground Integrated Networks [26.225658457052834]
In space-air-ground integrated networks (SAGIN), the inherent openness and extensive broadcast coverage expose these networks to significant eavesdropping threats.
It is challenging to conduct a secrecy-oriented access strategy due to both heterogeneous resources and different eavesdropping models.
We propose a Q-network approximation based deep learning approach for selecting the optimal access strategy for maximizing the sum secrecy rate.
arXiv Detail & Related papers (2023-08-28T06:48:06Z) - Federated Deep Reinforcement Learning for THz-Beam Search with Limited CSI [17.602598143822913]
Terahertz (THz) communication with ultra-wide available spectrum is a promising technique that can achieve the stringent requirement of high data rate in the next-generation wireless networks.
Finding beam directions for a large-scale antenna array to effectively overcome severe propagation attenuation of THz signals is a pressing need.
This paper proposes a novel approach of federated deep reinforcement learning (FDRL) to swiftly perform THz-beam search for multiple base stations.
arXiv Detail & Related papers (2023-04-25T19:28:15Z) - Sparsity-Aware Intelligent Massive Random Access Control in Open RAN: A Reinforcement Learning Based Approach [61.74489383629319]
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) brings great challenges to access control and management.
A reinforcement-learning (RL)-assisted scheme of closed-loop access control is proposed to preserve the sparsity of access requests.
Deep-RL-assisted SAUD is proposed to resolve highly complex environments with continuous and high-dimensional state and action spaces.
arXiv Detail & Related papers (2023-03-05T12:25:49Z) - DRL-GAN: A Hybrid Approach for Binary and Multiclass Network Intrusion Detection [2.7122540465034106]
Intrusion detection systems (IDS) are an essential security technology for detecting network attacks.
We implement a novel hybrid technique that uses synthetic data produced by a Generative Adversarial Network (GAN) as input for training a Deep Reinforcement Learning (DRL) model.
Our findings demonstrate that training the DRL on specific synthetic datasets can result in better performance in correctly classifying minority classes over training on the true imbalanced dataset.
arXiv Detail & Related papers (2023-01-05T19:51:24Z) - Towards Quantum-Enabled 6G Slicing [0.5156484100374059]
Quantum machine learning (QML) paradigms and their synergies with network slicing can be envisioned to be a disruptive technology.
We propose a cloud-based federated learning framework based on quantum deep reinforcement learning (QDRL).
Specifically, the decision agents leverage the remold of classical deep reinforcement learning (DRL) algorithm into variational quantum circuits (VQCs) to obtain the optimal cooperative control on slice resources.
arXiv Detail & Related papers (2022-10-21T07:16:06Z) - Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach [67.06539298956854]
The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z) - Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z) - Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z) - Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.