Computation and Communication Efficient Lightweighting Vertical Federated Learning
- URL: http://arxiv.org/abs/2404.00466v1
- Date: Sat, 30 Mar 2024 20:19:28 GMT
- Title: Computation and Communication Efficient Lightweighting Vertical Federated Learning
- Authors: Heqiang Wang, Jieming Bian, Lei Wang
- Abstract summary: We introduce the concept of Lightweight Vertical Federated Learning (LVFL), targeting both computational and communication efficiencies.
This approach involves separate lightweighting strategies for the feature model, to improve computational efficiency, and for feature embedding, to enhance communication efficiency.
Our evaluation of the algorithm on an image classification dataset reveals that LVFL significantly alleviates computational and communication demands while preserving robust learning performance.
- Score: 4.742058918600289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The exploration of computational and communication efficiency within Federated Learning (FL) has emerged as a prominent and crucial field of study. While most existing efforts to enhance these efficiencies have focused on Horizontal FL, the distinct processes and model structures of Vertical FL preclude the direct application of Horizontal FL-based techniques. In response, we introduce the concept of Lightweight Vertical Federated Learning (LVFL), targeting both computational and communication efficiencies. This approach involves separate lightweighting strategies for the feature model, to improve computational efficiency, and for feature embedding, to enhance communication efficiency. Moreover, we establish a convergence bound for our LVFL algorithm, which accounts for both communication and computational lightweighting ratios. Our evaluation of the algorithm on an image classification dataset reveals that LVFL significantly alleviates computational and communication demands while preserving robust learning performance. This work effectively addresses the gaps in communication and computational efficiency within Vertical FL.
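The paper does not include code, but the two lightweighting levers it names are easy to picture. Below is a minimal, hypothetical sketch of one VFL round in which each party magnitude-prunes its feature model (computational lightweighting) and top-k sparsifies its feature embedding before upload (communication lightweighting); the specific pruning and sparsification rules, shapes, and function names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_feature_model(weights, ratio):
    """Computational lightweighting: zero the smallest-magnitude fraction
    `ratio` of weights (simple magnitude pruning, an assumed scheme)."""
    if ratio <= 0.0:
        return weights
    threshold = np.quantile(np.abs(weights), ratio)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def compress_embedding(embedding, keep_ratio):
    """Communication lightweighting: transmit only the largest-magnitude
    fraction `keep_ratio` of embedding entries (top-k sparsification)."""
    flat = embedding.ravel()
    k = max(1, int(keep_ratio * flat.size))
    out = np.zeros_like(flat)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out[idx] = flat[idx]
    return out.reshape(embedding.shape)

# One illustrative round with two feature parties and a server-side head.
features = [rng.normal(size=(32, 10)) for _ in range(2)]   # per-party raw data
models = [rng.normal(size=(10, 4)) for _ in range(2)]      # per-party feature models

embeddings = []
for X, W in zip(features, models):
    W_light = prune_feature_model(W, ratio=0.5)            # cheaper local forward pass
    e = X @ W_light                                        # local feature embedding
    embeddings.append(compress_embedding(e, keep_ratio=0.25))  # cheaper upload

server_input = np.concatenate(embeddings, axis=1)          # server aggregates embeddings
print(server_input.shape)                                  # (32, 8)
```

The paper's convergence bound is stated in terms of both lightweighting ratios, so `ratio` and `keep_ratio` above are the knobs such an analysis would govern.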
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
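The blurb is terse, so here is a hedged illustration of threshold-style pruning in which only compact per-neuron thresholds, not full parameters, are exchanged; the exact pruning rule and all names below are assumptions for illustration, not SpaFL's published procedure.

```python
import numpy as np

def apply_thresholds(params, thresholds):
    """Prune every row (a neuron's fan-in) whose mean magnitude falls
    below that neuron's threshold (assumed rule, for illustration)."""
    row_score = np.abs(params).mean(axis=1)
    mask = (row_score >= thresholds)[:, None]
    return params * mask

# Exchanging per-neuron thresholds instead of full parameters shrinks the
# per-round payload from O(#parameters) to O(#neurons).
client_thresholds = [np.random.default_rng(i).uniform(0.5, 1.5, size=8)
                     for i in range(4)]
global_thresholds = np.mean(client_thresholds, axis=0)

W = np.random.default_rng(42).normal(size=(8, 16))
W_sparse = apply_thresholds(W, global_thresholds)
print(f"active neurons: {int((np.abs(W_sparse).sum(axis=1) > 0).sum())}/8")
```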
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Exploring the Practicality of Federated Learning: A Survey Towards the Communication Perspective [1.088537320059347]
Federated Learning (FL) is a promising paradigm that offers significant advancements in privacy-preserving, decentralized machine learning.
However, the practical deployment of FL systems faces a significant bottleneck: the communication overhead.
This survey investigates various strategies and advancements made in communication-efficient FL.
arXiv Detail & Related papers (2024-05-30T19:21:33Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
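As a rough, hypothetical illustration of a client-side AMSGrad variant whose effective step size self-adjusts to each client's gradients, consider the sketch below; the state layout and toy loss are assumptions, and FedLALR's actual scheduling and convergence conditions are given in the paper.

```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad state: each client's effective step size
    eta / sqrt(v_hat) adapts to its own (possibly non-IID) gradients."""
    def __init__(self, dim, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.eta, self.beta1, self.beta2, self.eps = eta, beta1, beta2, eps
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)   # non-decreasing second moment
        return params - self.eta * self.m / (np.sqrt(self.v_hat) + self.eps)

# Each client runs a few local AMSGrad steps, then the server averages.
rng = np.random.default_rng(0)
params = rng.normal(size=5)
clients = [ClientAMSGrad(dim=5) for _ in range(3)]
local_models = []
for c in clients:
    p = params.copy()
    for _ in range(4):                                # local iterations
        grad = 2 * p + rng.normal(scale=0.1, size=5)  # toy quadratic loss ||p||^2
        p = c.step(p, grad)
    local_models.append(p)
params = np.mean(local_models, axis=0)                # server-side averaging
print(np.round(params, 4))
```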
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
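As a toy stand-in for the fronthaul quantization component of that design, the sketch below uniformly quantizes an uploaded embedding and measures the resulting distortion; the paper jointly optimizes transceivers and quantizer resolution, which this deliberately does not attempt.

```python
import numpy as np

def fronthaul_quantize(x, bits):
    """Uniform scalar quantization of an embedding before fronthaul
    transport (an illustrative stand-in for the optimized design)."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo) * levels)   # integer codewords
    return q.astype(np.uint16), (lo, hi, levels)

def dequantize(q, meta):
    lo, hi, levels = meta
    return q / levels * (hi - lo) + lo

e = np.random.default_rng(1).normal(size=256)     # a party's embedding
q, meta = fronthaul_quantize(e, bits=4)
distortion = np.mean((dequantize(q, meta) - e) ** 2)
print(f"4-bit fronthaul MSE: {distortion:.5f}")   # what the joint design trades off
```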
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that couples the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the set of participating IoT devices, we can avoid training divergence in compression-aided FL while maintaining communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Communication-Efficient Consensus Mechanism for Federated Reinforcement Learning [20.891460617583302]
We show that FL can improve the policy performance of independent reinforcement learning (IRL) in terms of training efficiency and stability.
To reach a good balance between improving the model's convergence performance and reducing the required communication and computation overheads, this paper proposes a system utility function.
arXiv Detail & Related papers (2022-01-30T04:04:24Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous stochastic quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms take descent steps scaled by approximate Hessian information, without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation makes better use of computation resources.
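A minimal sketch of "scale the step by approximate Hessian information, never invert the Hessian", using a secant-based diagonal curvature estimate on a toy quadratic; AsySQN's real updates run asynchronously across VFL parties with proper quasi-Newton machinery, so treat every name here as illustrative.

```python
import numpy as np

def qn_step(x, g, h_est, lr=1.0, eps=1e-8):
    """Descent step scaled elementwise by an approximate diagonal Hessian,
    avoiding any explicit inverse-Hessian computation."""
    return x - lr * g / (np.abs(h_est) + eps)

# Toy quadratic 0.5 * sum(h * x^2) with badly spread curvature.
h_true = np.array([100.0, 1.0, 0.01])
x_prev = np.array([1.0, 1.0, 1.0])
g_prev = h_true * x_prev
x = x_prev - 0.005 * g_prev                       # one plain-SGD warmup step
for _ in range(10):
    g = h_true * x
    h_est = (g - g_prev) / (x - x_prev + 1e-12)   # secant (quasi-Newton) estimate
    x_prev, g_prev = x, g
    x = qn_step(x, g, h_est)
print(np.round(x, 6))                             # ~[0 0 0] despite the conditioning
```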
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
- 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air [32.14738452396869]
This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
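To make the pipeline concrete, here is an idealized, hypothetical sketch of 1-bit quantization followed by over-the-air superposition and a majority-vote decode; it ignores channel noise and the compressive-sensing reconstruction step that the actual scheme includes.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, lr = 10, 8, 0.05

# Each client's local gradient: a shared signal plus client-specific noise.
true_grad = rng.normal(size=dim)
local_grads = [true_grad + rng.normal(scale=0.5, size=dim)
               for _ in range(num_clients)]

# 1-bit quantization at each client; an ideal analog channel then adds the
# transmitted +/-1 symbols, so the server observes only their superposition.
tx = [np.sign(g) for g in local_grads]
superposed = np.sum(tx, axis=0)                   # over-the-air aggregation
update = np.sign(superposed)                      # majority-vote decode

params = np.zeros(dim)
params -= lr * update                             # signSGD-style global step
print(update == np.sign(true_grad))               # votes mostly match true signs
```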
arXiv Detail & Related papers (2021-03-30T03:50:31Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
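Bisection applies here because delay feasibility is monotone: if a delay budget t is achievable under the resource constraints, any larger budget is too. A generic sketch under that assumption follows (the toy resource model and every number below are made up, not the paper's formulation):

```python
def bisection_min_delay(feasible, lo, hi, tol=1e-6):
    """Smallest t in [lo, hi] with feasible(t) True, assuming feasibility
    is monotone non-decreasing in t."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # mid works: try a smaller delay budget
        else:
            lo = mid          # mid infeasible: need more time
    return hi

# Toy model: one device must upload `bits` within t at a fixed `rate`.
bits, rate = 1e6, 2e5         # payload (bits), uplink rate (bits/s)
feasible = lambda t: rate * t >= bits
print(f"minimum delay: {bisection_min_delay(feasible, 0.0, 100.0):.3f} s")  # ~5.000
```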
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.