Low-Latency Cooperative Spectrum Sensing via Truncated Vertical
Federated Learning
- URL: http://arxiv.org/abs/2208.03694v1
- Date: Sun, 7 Aug 2022 10:39:27 GMT
- Title: Low-Latency Cooperative Spectrum Sensing via Truncated Vertical
Federated Learning
- Authors: Zezhong Zhang, Guangxu Zhu, Shuguang Cui
- Abstract summary: We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is characterized via mathematical analysis and validated by simulation results.
- Score: 51.51440623636274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the exponential increase in the demand for wireless data
transmission has raised the urgency of accurate spectrum sensing approaches to
improve spectrum efficiency. The unreliability of conventional spectrum sensing
methods by using measurements from a single secondary user (SU) has motivated
research on cooperative spectrum sensing (CSS). In this work, we propose a
vertical federated learning (VFL) framework to exploit the distributed features
across multiple SUs without compromising data privacy. However, the repetitive
training process in VFL faces the issue of high communication latency. To
accelerate the training process, we propose a truncated vertical federated
learning (T-VFL) algorithm, where the training latency is greatly reduced by
integrating the standard VFL algorithm with a channel-aware user scheduling
policy. The convergence performance of T-VFL is characterized via mathematical
analysis and validated by simulation results. Moreover, to guarantee the
convergence of the T-VFL algorithm, we derive three design rules for
the neural architectures used under the VFL framework, whose effectiveness
is verified through simulations.
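The channel-aware truncation idea can be sketched as a toy training round: only the SUs with the best instantaneous channels compute and upload their local feature embeddings, so stragglers on weak channels never add to the round's latency. This is a minimal illustration, not the paper's algorithm; the encoder shapes, channel model, and top-K scheduling rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_SUS = 8   # secondary users, each holding a private feature slice
FEAT_DIM = 4  # local feature dimension per SU
EMBED_DIM = 2 # embedding each SU uploads to the fusion center
K = 3         # number of SUs scheduled per round (the "truncation")

# Each SU keeps its raw features and encoder weights locally (privacy).
local_features = [rng.normal(size=FEAT_DIM) for _ in range(NUM_SUS)]
local_encoders = [rng.normal(scale=0.1, size=(EMBED_DIM, FEAT_DIM))
                  for _ in range(NUM_SUS)]

def channel_aware_schedule(channel_gains, k):
    """Schedule the k SUs with the best channel gains this round."""
    return np.argsort(channel_gains)[-k:]

def training_round():
    # Channel gains vary per round (Rayleigh-like magnitudes, assumed model).
    gains = np.abs(rng.normal(size=NUM_SUS))
    scheduled = channel_aware_schedule(gains, K)

    # Only scheduled SUs compute and upload embeddings, cutting latency.
    embeddings = [local_encoders[i] @ local_features[i] for i in scheduled]

    # Fusion center aggregates embeddings into a sensing decision score.
    fused = np.mean(embeddings, axis=0)
    decision_score = float(fused.sum())  # stand-in for the server head
    return scheduled, decision_score

scheduled, score = training_round()
print(f"scheduled SUs: {sorted(scheduled.tolist())}, score: {score:.3f}")
```

Raw features never leave the SUs; only the scheduled K of NUM_SUS embeddings cross the channel each round, which is where the latency saving comes from.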
Related papers
- Online Vertical Federated Learning for Cooperative Spectrum Sensing [8.081617656116139]
Online vertical federated learning (OVFL) is designed to address the challenges of ongoing data streams and shifting learning goals.
OVFL achieves a sublinear regret bound, thereby evidencing its efficiency.
arXiv Detail & Related papers (2023-12-18T17:19:53Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, the training latency increases due to limited transmission bandwidth, and model performance degrades under differential privacy (DP) protection.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
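The gradient sparsification named in the title can be illustrated with generic top-k sparsification, where each client transmits only its largest-magnitude gradient entries. This is a standard sketch of the technique, not this paper's specific scheme; the function name and payload accounting are assumptions.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest.

    Transmitting only (index, value) pairs for k entries shrinks the
    uplink payload from len(grad) floats to roughly 2*k numbers.
    """
    flat = np.asarray(grad, dtype=float).ravel()
    if k >= flat.size:
        return flat.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse

g = np.array([0.05, -3.0, 0.2, 1.5, -0.1])
print(topk_sparsify(g, 2))  # only -3.0 and 1.5 survive
```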
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Towards Communication-efficient Vertical Federated Learning Training via
Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than the existing works.
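The local-update idea behind reducing cross-party rounds can be sketched generically: one party caches the other party's last uploaded partial result and runs several local steps against that stale cache before the next exchange. CELU-VFL's actual protocol (cached statistics, staleness control) is more involved; everything below, including the logistic model and step counts, is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two parties hold disjoint feature slices of the same 16 samples.
x_a, x_b = rng.normal(size=(16, 3)), rng.normal(size=(16, 2))
y = rng.integers(0, 2, size=16).astype(float)
w_a, w_b = np.zeros(3), np.zeros(2)
LR, LOCAL_STEPS = 0.1, 4  # local steps reusing the cached cross-party term

def local_rounds(w_a, w_b, comm_rounds=5):
    for _ in range(comm_rounds):
        cached_a = x_a @ w_a          # party A's partial logits, sent once
        for _ in range(LOCAL_STEPS):  # party B updates against the cache
            logits = cached_a + x_b @ w_b
            pred = 1.0 / (1.0 + np.exp(-logits))
            w_b -= LR * x_b.T @ (pred - y) / len(y)
        # One exchange in the other direction per round (symmetric idea).
        cached_b = x_b @ w_b
        logits = x_a @ w_a + cached_b
        pred = 1.0 / (1.0 + np.exp(-logits))
        w_a -= LR * x_a.T @ (pred - y) / len(y)
    return w_a, w_b

w_a, w_b = local_rounds(w_a, w_b)
```

Each communication round now carries LOCAL_STEPS gradient steps' worth of progress, which is how caching trades a little staleness for fewer exchanges.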
arXiv Detail & Related papers (2022-07-29T12:10:36Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5%, respectively.
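The basic time-triggered idea can be sketched as a fixed aggregation deadline: the server aggregates whatever updates arrive within the window and moves on. TT-Fed itself additionally groups users into tiers by how many periods they need; the window length, delay model, and stand-in updates below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

NUM_USERS, DIM = 10, 4
WINDOW = 1.0  # aggregation period in seconds (assumed)

def tt_round(global_model):
    """One time-triggered round: aggregate whatever arrives within WINDOW."""
    # Simulated per-user upload completion times (exponential delays).
    arrival = rng.exponential(scale=0.8, size=NUM_USERS)
    updates = rng.normal(size=(NUM_USERS, DIM))  # stand-in local updates
    on_time = arrival <= WINDOW
    if on_time.any():
        global_model = global_model + updates[on_time].mean(axis=0)
    return global_model, int(on_time.sum())

model, served = tt_round(np.zeros(DIM))
print(f"users aggregated this round: {served}")
```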
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better
Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms make descent steps scaled by approximate Hessian information without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
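The core quasi-Newton idea, scaling descent steps by approximate curvature without forming the inverse Hessian, can be shown with a diagonal-Hessian toy example. AsySQN's actual approximation and asynchrony are not reproduced here; the diagonal estimate and quadratic test problem are assumptions.

```python
import numpy as np

def quasi_newton_step(w, grad, hess_diag_est, eps=1e-8):
    """Scale the gradient elementwise by an approximate diagonal Hessian.

    Dividing by a diagonal estimate sidesteps forming or inverting the
    full Hessian, which is the practical point of quasi-Newton methods.
    """
    return w - grad / (hess_diag_est + eps)

# Quadratic f(w) = 0.5 * sum(H * w**2) with known diagonal H: one scaled
# step from any point lands (essentially) at the minimizer w* = 0.
H = np.array([4.0, 1.0, 0.25])
w = np.array([2.0, -3.0, 8.0])
grad = H * w
print(quasi_newton_step(w, grad, H))  # ~[0, 0, 0]
```

A plain gradient step with one learning rate would overshoot the steep coordinate or crawl along the flat one; the curvature scaling equalizes progress across coordinates.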
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
- Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified BFL algorithm called scalableBFL (SBFL).
arXiv Detail & Related papers (2020-12-31T07:32:44Z)
- Delay Minimization for Federated Learning Over Wireless Communication
Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
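The bisection search mentioned for the delay-minimization entry can be illustrated on a toy monotone resource-allocation problem: find the smallest bandwidth whose transmission delay meets a budget. The link model, function names, and numbers below are assumptions, not the paper's formulation.

```python
def bisect_min_bandwidth(delay_fn, budget, lo, hi, tol=1e-6):
    """Smallest bandwidth whose delay meets the budget, assuming
    delay_fn is monotonically decreasing in bandwidth."""
    if delay_fn(hi) > budget:
        raise ValueError("budget infeasible even at maximum bandwidth")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delay_fn(mid) <= budget:
            hi = mid  # feasible: try smaller bandwidth
        else:
            lo = mid  # infeasible: need more bandwidth
    return hi

# Toy model: transmitting 1e6 bits at a rate of 10 bits/s per Hz.
delay = lambda bw: 1e6 / (bw * 10.0)  # seconds (assumed link model)
bw = bisect_min_bandwidth(delay, budget=2.0, lo=1.0, hi=1e6)
print(f"minimum bandwidth: {bw:.1f} Hz")  # ≈ 50000.0 Hz
```

Bisection applies here because feasibility is monotone in the resource: once a bandwidth meets the budget, every larger one does too.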
This list is automatically generated from the titles and abstracts of the papers on this site.