Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique
- URL: http://arxiv.org/abs/2409.16456v1
- Date: Tue, 24 Sep 2024 20:57:22 GMT
- Title: Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique
- Authors: Elissa Mhanna, Mohamad Assaad
- Abstract summary: Federated learning (FL) is a popular machine learning technique that enables multiple users to collaboratively train a model while maintaining the user data privacy.
A significant challenge in FL is the communication bottleneck in the upload direction and the corresponding energy consumption of the devices.
We show the superiority of our method, in terms of communication overhead and energy consumption, as compared to standard gradient-based FL methods.
- Score: 14.986031916712108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a popular machine learning technique that enables multiple users to collaboratively train a model while maintaining the user data privacy. A significant challenge in FL is the communication bottleneck in the upload direction, and thus the corresponding energy consumption of the devices, attributed to the increasing size of the model/gradient. In this paper, we address this issue by proposing a zero-order (ZO) optimization method that requires the upload of a quantized single scalar per iteration by each device instead of the whole gradient vector. We prove its theoretical convergence and find an upper bound on its convergence rate in the non-convex setting, and we discuss its implementation in practical scenarios. Our FL method and the corresponding convergence analysis take into account the impact of quantization and packet dropping due to wireless errors. We also show the superiority of our method, in terms of communication overhead and energy consumption, as compared to standard gradient-based FL methods.
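The core mechanism is easy to prototype: if every device can regenerate the server's perturbation direction from a shared seed, its entire uplink payload per iteration is one quantized directional-derivative scalar. Below is a minimal sketch of that idea under simplifying assumptions (a two-point estimator, a uniform scalar quantizer, an error-free channel, and toy least-squares objectives); the paper's actual estimator, quantizer, and wireless-error model differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_device(n=32, d=10):
    # Toy local objective: least squares on this device's private data.
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return lambda w: 0.5 * np.mean((X @ w - y) ** 2)

def quantize(x, bits=8, clip=5.0):
    # Uniform scalar quantizer for the single value each device uploads.
    levels = 2 ** bits - 1
    x = np.clip(x, -clip, clip)
    return np.round((x + clip) / (2 * clip) * levels) / levels * (2 * clip) - clip

d, K = 10, 5
devices = [make_device(d=d) for _ in range(K)]
w = np.zeros(d)
mu, lr = 1e-3, 0.05

for t in range(2000):
    # The server broadcasts the model and a round seed; every node can
    # regenerate the same perturbation u_t, so only a scalar goes uplink.
    u = np.random.default_rng(t).standard_normal(d)
    scalars = [quantize((f(w + mu * u) - f(w - mu * u)) / (2 * mu))  # two-point ZO estimate
               for f in devices]
    # The server rebuilds a d-dimensional update from the averaged scalar.
    w -= lr * np.mean(scalars) * u

print("final average loss:", np.mean([f(w) for f in devices]))
```

At 8 bits per round, the uplink cost here is independent of the model dimension d, which is the communication saving the abstract refers to; the price is a higher-variance update that needs more iterations.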
Related papers
- Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method [14.986031916712108]
Cross-device federated learning (FL) is a growing machine learning framework whereby multiple edge devices collaborate to train a model without disclosing their raw data.
We show how to harness the wireless channel in the learning algorithm itself, instead of analyzing it to remove its impact.
arXiv Detail & Related papers (2024-01-30T21:46:09Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
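A toy illustration of the AirComp idea this entry builds on: devices transmit analog update vectors simultaneously, the receiver observes their noisy superposition, and the average is formed "in the channel". This sketch ignores fading, power control, and the signal-processing schemes the paper actually studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def aircomp_round(local_updates, snr_db=20.0):
    """Toy over-the-air aggregation: devices transmit analog update vectors
    simultaneously; the receiver observes their superposition plus noise,
    so averaging happens 'in the channel' rather than at the server."""
    updates = np.stack(local_updates)        # (K, d)
    superposed = updates.sum(axis=0)         # the channel adds the waveforms
    sig_pow = np.mean(superposed ** 2)
    noise = rng.normal(scale=np.sqrt(sig_pow / 10 ** (snr_db / 10)),
                       size=superposed.shape)
    return (superposed + noise) / len(local_updates)  # noisy FedAvg mean

K, d = 8, 6
true_updates = [rng.normal(size=d) for _ in range(K)]
est = aircomp_round(true_updates)
print("ideal mean :", np.mean(true_updates, axis=0).round(3))
print("AirComp est:", est.round(3))
```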
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
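A hedged sketch of the split described above, with hypothetical shapes and update rules chosen purely for illustration: a pruned shared representation that the server aggregates, plus a per-device head that never leaves the device.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical split, for illustration only: a shared representation matrix
# that is pruned and aggregated globally, plus a per-device personal head.
d_in, d_rep, K = 20, 8, 4
global_rep = rng.normal(size=(d_rep, d_in))
heads = [rng.normal(size=d_rep) for _ in range(K)]

def prune(w, keep=0.5):
    # Magnitude pruning: zero the smallest-magnitude entries so only a
    # fraction `keep` of the shared weights is trained and uploaded.
    thresh = np.quantile(np.abs(w), 1.0 - keep)
    return np.where(np.abs(w) >= thresh, w, 0.0)

# One illustrative round; the noise terms stand in for local training steps.
local_reps = [prune(global_rep + 0.01 * rng.normal(size=global_rep.shape))
              for _ in range(K)]
global_rep = np.mean(local_reps, axis=0)   # server aggregates the shared part
heads = [h + 0.01 * rng.normal(size=d_rep) for h in heads]  # heads stay local
print("fraction of zeros in aggregated shared part:", np.mean(global_rep == 0))
```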
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
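The knob such a policy tunes is the per-round bitwidth; a minimal sketch of the underlying trade-off (not the paper's MDP formulation or RL method) shows how uniform quantization error falls as the bitwidth, and thus the uplink payload, grows.

```python
import numpy as np

def quantize_model(w, bits):
    """Uniform quantization of a parameter vector to `bits` bits per entry,
    the kind of knob a bitwidth-selection policy would tune per round."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

rng = np.random.default_rng(3)
w = rng.normal(size=1000)
for b in (2, 4, 8):
    err = np.mean((quantize_model(w, b) - w) ** 2)
    print(f"{b}-bit quantization MSE: {err:.2e}")  # MSE shrinks ~4x per extra bit
```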
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that captures the trade-off between the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the set of participating IoT devices, we can avoid training divergence in compression-aided FL while maintaining communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv algorithm incorporates a local-global model combiner into the FL computation step.
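The combiner is not specified in this summary; as a purely hypothetical reading, a local-global combiner can be as simple as a convex mix of the device's update and the broadcast model, sketched below with an illustrative mixing weight `lam`.

```python
import numpy as np

def combine(local_w, global_w, lam):
    """Hypothetical local-global combiner: each device's working model is a
    convex mix of its own update and the broadcast global model. `lam` is
    an illustrative mixing weight, not the paper's tuned coefficient."""
    return lam * local_w + (1.0 - lam) * global_w

rng = np.random.default_rng(4)
global_w = rng.normal(size=5)
local_w = global_w + 0.5 * rng.normal(size=5)
print(combine(local_w, global_w, lam=0.3).round(3))
```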
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- 1-Bit Compressive Sensing for Efficient Federated Learning Over the Air [32.14738452396869]
This paper develops and analyzes a communication-efficient scheme for federated learning (FL) over the air, which incorporates 1-bit compressive sensing (CS) into analog aggregation transmissions.
For scalable computing, we develop an efficient implementation that is suitable for large-scale networks.
Simulation results show that our proposed 1-bit CS based FL over the air achieves comparable performance to the ideal case.
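A rough sketch of the 1-bit-CS-over-the-air pipeline under strong simplifications: each device uploads only the signs of a randomly compressed gradient, the channel sums them, and the server applies a crude matched-filter decoder. Real schemes use iterative 1-bit CS recovery and the paper's analog-aggregation design; the true-norm rescaling below is a cheat for comparability only.

```python
import numpy as np

rng = np.random.default_rng(5)

d, m, K = 64, 256, 10
A = rng.normal(size=(m, d)) / np.sqrt(m)    # shared random sensing matrix

g_true = np.zeros(d)
g_true[:5] = rng.normal(size=5)             # sparse "gradient"
locals_g = [g_true + 0.05 * rng.normal(size=d) for _ in range(K)]

# Each device uploads only the signs of its compressed gradient (1 bit per
# measurement); analog superposition sums the sign vectors over the air.
rx = sum(np.sign(A @ g) for g in locals_g)

# Crude decoder: the matched filter A^T recovers the common direction up to
# scale (the scale is fixed with the true norm, unknown in practice).
g_hat = A.T @ rx
g_hat *= np.linalg.norm(g_true) / np.linalg.norm(g_hat)

cos = g_hat @ g_true / (np.linalg.norm(g_hat) * np.linalg.norm(g_true))
print(f"cosine similarity to true gradient: {cos:.3f}")
```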
arXiv Detail & Related papers (2021-03-30T03:50:31Z)
- UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to train learning models without requiring the users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only a minimal distortion.
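UVeQFed builds on universal (dithered) quantization; the sketch below shows the scalar, subtractive-dithered version of that idea, with the lattice/vector machinery of the actual scheme omitted. The key property, which the distortion analysis exploits, is that the quantization error behaves like additive noise independent of the model weights.

```python
import numpy as np

rng = np.random.default_rng(6)

def dithered_quantize(x, step, seed):
    """Subtractive dithered (universal-style) quantization: device and server
    share the dither via a common seed, so the quantization error acts like
    additive noise independent of the model weights."""
    z = np.random.default_rng(seed).uniform(-step / 2, step / 2, size=x.shape)
    q = step * np.round((x + z) / step)   # what the device actually uploads
    return q - z                          # server subtracts the known dither

w = rng.normal(size=10000)
w_hat = dithered_quantize(w, step=0.1, seed=42)
err = w_hat - w
print(f"MSE {np.mean(err**2):.2e} (≈ step²/12 = {0.1**2/12:.2e})")
print(f"corr(error, weights): {np.corrcoef(err, w)[0, 1]:+.3f}")  # ≈ 0
```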
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
- Gradient Statistics Aware Power Control for Over-the-Air Federated Learning [59.40860710441232]
Federated learning (FL) is a promising technique that enables many edge devices to train a machine learning model collaboratively in wireless networks.
This paper studies the power control problem for over-the-air FL by taking gradient statistics into account.
arXiv Detail & Related papers (2020-03-04T14:06:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.