Online Vertical Federated Learning for Cooperative Spectrum Sensing
- URL: http://arxiv.org/abs/2312.11363v1
- Date: Mon, 18 Dec 2023 17:19:53 GMT
- Title: Online Vertical Federated Learning for Cooperative Spectrum Sensing
- Authors: Heqiang Wang, Jie Xu
- Abstract summary: Online vertical federated learning (OVFL) is designed to address the challenges of ongoing data streams and shifting learning goals.
OVFL achieves a sublinear regret bound, thereby evidencing its efficiency.
- Score: 8.081617656116139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing demand for wireless communication underscores the need to
optimize radio frequency spectrum utilization. An effective strategy for
leveraging underutilized licensed frequency bands is cooperative spectrum
sensing (CSS), which enables multiple secondary users (SUs) to collaboratively
detect the spectrum usage of primary users (PUs) prior to accessing the
licensed spectrum. The increasing popularity of machine learning has led to a
shift from traditional CSS methods to those based on deep learning. However,
deep learning-based CSS methods often rely on centralized learning, posing
challenges like communication overhead and data privacy risks. Recent research
suggests vertical federated learning (VFL) as a potential solution, with its
core concept centered on partitioning the deep neural network into distinct
segments, each trained separately. However, existing VFL-based
CSS works do not fully address the practical challenges arising from streaming
data and objective shifts. In this work, we introduce online vertical
federated learning (OVFL), a robust framework designed to address the
challenges of ongoing data streams and shifting learning goals. Our theoretical
analysis reveals that OVFL achieves a sublinear regret bound, thereby
evidencing its efficiency. Empirical results from our experiments show that
OVFL outperforms benchmarks in CSS tasks. We also explore the impact of various
parameters on the learning performance.
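The vertical split the abstract describes can be sketched in a few lines. The following is a hypothetical minimal example, not the paper's actual method: all names and dimensions are illustrative, and plain linear maps stand in for the neural network segments. Each SU applies a bottom model to its local feature slice and uploads an embedding; the server concatenates the embeddings, trains a top model on a logistic loss, and returns each SU's slice of the embedding gradient, one streaming sample at a time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 2 SUs, each observing 4 local features per sample.
N_CLIENTS, D_LOCAL, D_EMB = 2, 4, 3

# Bottom models (one per SU) and the server's top model, all linear for brevity.
W_bottom = [rng.normal(scale=0.1, size=(D_LOCAL, D_EMB)) for _ in range(N_CLIENTS)]
w_top = rng.normal(scale=0.1, size=N_CLIENTS * D_EMB)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_round(x_parts, y, lr=0.1):
    """One split-learning round on a single streaming sample."""
    # 1) Each SU embeds its own feature slice and uploads the embedding.
    embs = [x_parts[k] @ W_bottom[k] for k in range(N_CLIENTS)]
    h = np.concatenate(embs)
    # 2) Server computes the prediction and the log-loss gradient.
    p = sigmoid(w_top @ h)
    g = p - y                       # d(log-loss)/d(logit)
    # 3) Server computes the embedding gradients, then updates the top model.
    grad_h = g * w_top
    w_top[:] = w_top - lr * g * h
    # 4) Each SU updates its bottom model from its gradient slice.
    for k in range(N_CLIENTS):
        gk = grad_h[k * D_EMB:(k + 1) * D_EMB]
        W_bottom[k] -= lr * np.outer(x_parts[k], gk)
    return p

# Stream a few samples; labels follow a fixed synthetic rule for illustration.
for _ in range(200):
    x = [rng.normal(size=D_LOCAL) for _ in range(N_CLIENTS)]
    y = float(np.concatenate(x).sum() > 0)
    online_round(x, y)
```

This only illustrates the split-and-stream structure; OVFL's actual update rules, regret analysis, and treatment of objective shift are given in the paper.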
Related papers
- Federated Learning for UAV-Based Spectrum Sensing: Enhancing Accuracy Through SNR-Weighted Model Aggregation [0.0]
Unmanned aerial vehicle (UAV) networks call for different points of view on 3D space, with its distinct challenges and opportunities.
We propose a federated learning (FL)-based method for spectrum sensing in UAV networks to account for their distributed nature and limited computational capacity.
We also develop a federated aggregation method, namely FedSNR, that considers the signal-to-noise ratio observed by UAVs to acquire a global model.
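An SNR-weighted aggregation rule of the kind summarized above can be sketched as follows. This is a hypothetical illustration in the spirit of FedSNR, not its published formula: local models are averaged with weights proportional to each UAV's observed SNR (converted from dB to linear scale):

```python
import numpy as np

def snr_weighted_aggregate(models, snrs_db):
    """Aggregate local model vectors with SNR-derived weights.

    Illustrative sketch: clients observing a higher SNR contribute
    more to the global model. `models` is a list of equally shaped
    parameter vectors; `snrs_db` are the corresponding SNRs in dB.
    """
    snrs = 10.0 ** (np.asarray(snrs_db, dtype=float) / 10.0)  # dB -> linear
    weights = snrs / snrs.sum()
    return sum(w * m for w, m in zip(weights, np.asarray(models, dtype=float)))
```

With equal SNRs this reduces to plain federated averaging; unequal SNRs skew the global model toward the better-positioned sensors.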
arXiv Detail & Related papers (2024-11-17T19:24:49Z)
- Vertical Federated Learning Hybrid Local Pre-training [4.31644387824845]
We propose a novel VFL Hybrid Local Pre-training (VFLHLP) approach for Vertical Federated Learning (VFL)
VFLHLP first pre-trains local networks on the local data of participating parties.
Then it utilizes these pre-trained networks to adjust the sub-model for the labeled party or enhance representation learning for other parties during downstream federated learning on aligned data.
arXiv Detail & Related papers (2024-05-20T08:57:39Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
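The client-specific adaptive learning rate idea can be illustrated with per-client AMSGrad state. This is a hypothetical sketch in the spirit of FedLALR, not its exact algorithm: each client keeps its own first- and second-moment estimates, so its effective step size adapts to its own gradient history rather than to a globally shared schedule:

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad state (illustrative sketch).

    Each client instantiates its own optimizer, so the denominator
    sqrt(v_hat) -- and hence the effective learning rate -- is
    client-specific.
    """
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)       # first-moment estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # AMSGrad: running max of v

    def step(self, params, grad):
        grad = np.asarray(grad, dtype=float)
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        self.v_hat = np.maximum(self.v_hat, self.v)  # non-decreasing denominator
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```

The running-max denominator is what distinguishes AMSGrad from Adam: it guarantees the effective step size never grows between iterations, which underpins convergence analyses of this family.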
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
arXiv Detail & Related papers (2023-05-30T01:38:54Z) - Low-Latency Cooperative Spectrum Sensing via Truncated Vertical
Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z) - Deep Frequency Filtering for Domain Generalization [55.66498461438285]
Deep Neural Networks (DNNs) have preferences for some frequency components in the learning process.
We propose Deep Frequency Filtering (DFF) for learning domain-generalizable features.
We show that applying our proposed DFF on a plain baseline outperforms the state-of-the-art methods on different domain generalization tasks.
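The core operation behind frequency filtering can be shown in a few lines. This is a hypothetical sketch of the general idea, not the paper's DFF module: transform a feature map to the frequency domain, reweight its components with a mask (learned in the actual method; supplied as an argument here), and transform back:

```python
import numpy as np

def frequency_filter(feature_map, mask):
    """Apply a spectral mask to a 2-D feature map (illustrative sketch).

    `mask` plays the role of the learned filter: entries near 1 keep a
    frequency component, entries near 0 suppress it.
    """
    spectrum = np.fft.fft2(feature_map)
    return np.real(np.fft.ifft2(spectrum * mask))
```

With an all-ones mask the round trip is the identity; a mask that zeroes high frequencies acts as a learned low-pass filter on the features.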
arXiv Detail & Related papers (2022-03-23T05:19:06Z)
- Vertical Federated Learning: Challenges, Methodologies and Experiments [34.4865409422585]
Vertical federated learning (VFL) is capable of constructing a hyper ML model by embracing sub-models from different clients.
In this paper, we discuss key challenges in VFL with effective solutions, and conduct experiments on real-life datasets.
arXiv Detail & Related papers (2022-02-09T06:56:41Z)
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.