Fast Federated Learning by Balancing Communication Trade-Offs
- URL: http://arxiv.org/abs/2105.11028v1
- Date: Sun, 23 May 2021 21:55:14 GMT
- Title: Fast Federated Learning by Balancing Communication Trade-Offs
- Authors: Milad Khademi Nori, Sangseok Yun, and Il-Min Kim
- Abstract summary: Federated Learning (FL) has recently received a lot of attention for large-scale privacy-preserving machine learning.
High communication overheads due to frequent gradient transmissions decelerate FL.
We propose an enhanced FL scheme, namely Fast FL (FFL), that jointly and dynamically adjusts the two variables to minimize the learning error.
- Score: 9.89867121050673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has recently received a lot of attention for
large-scale privacy-preserving machine learning. However, high communication
overheads due to frequent gradient transmissions decelerate FL. To mitigate the
communication overheads, two main techniques have been studied: (i) local
update of weights characterizing the trade-off between communication and
computation and (ii) gradient compression characterizing the trade-off between
communication and precision. To the best of our knowledge, studying and
balancing those two trade-offs jointly and dynamically while considering their
impacts on convergence has remained unresolved even though it promises
significantly faster FL. In this paper, we first formulate our problem to
minimize learning error with respect to two variables: local update
coefficients and sparsity budgets of gradient compression, which characterize the
trade-offs between communication and computation/precision, respectively. We
then derive an upper bound of the learning error in a given wall-clock time
considering the interdependency between the two variables. Based on this
theoretical analysis, we propose an enhanced FL scheme, namely Fast FL (FFL),
that jointly and dynamically adjusts the two variables to minimize the learning
error. We demonstrate that FFL consistently achieves higher accuracy faster
than similar existing schemes in the literature.
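
To make the two knobs concrete, the following is a minimal Python sketch of one communication round in this setting, with tau playing the role of the local update coefficient and k the sparsity budget of a top-k compressor. It is an illustration under simplifying assumptions, not the authors' implementation; the function names (top_k_sparsify, local_update, ffl_round) are made up here, and the actual FFL schedule for picking (tau, k) comes from the paper's derived error bound, which this sketch does not reproduce.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep the k largest-magnitude entries of the update and zero the rest."""
    flat = update.ravel().copy()
    if k < flat.size:
        drop = np.argpartition(np.abs(flat), -k)[:-k]
        flat[drop] = 0.0
    return flat.reshape(update.shape)

def local_update(weights, grad_fn, data, tau, lr):
    """Run tau local gradient steps and return the accumulated model delta."""
    w = weights.copy()
    for _ in range(tau):
        w = w - lr * grad_fn(w, data)
    return w - weights

def ffl_round(weights, client_data, grad_fn, tau, k, lr=0.01):
    """One communication round: tau local steps per client, top-k sparsified deltas,
    plain averaging at the server. tau trades communication for computation and k
    trades communication for precision; FFL tunes both jointly every round."""
    deltas = [top_k_sparsify(local_update(weights, grad_fn, d, tau, lr), k)
              for d in client_data]
    return weights + np.mean(deltas, axis=0)
```

A dynamic variant in the spirit of the abstract would, at the start of each round, evaluate the derived error bound for a small set of candidate (tau, k) pairs under the remaining wall-clock budget and pick the minimizer.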
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models [56.21666819468249]
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server.
We introduce FedComLoc, integrating practical and effective compression into Scaffnew to further enhance communication efficiency.
arXiv Detail & Related papers (2024-03-14T22:29:59Z)
- Sparse Training for Federated Learning with Regularized Error Correction [9.852567834643292]
Federated Learning (FL) has attracted much interest due to the significant advantages it brings to training deep neural network (DNN) models.
FLARE presents a novel sparse training approach via accumulated pulling of the updated models with regularization on the embeddings in the FL process.
The performance of FLARE is validated through extensive experiments on diverse and complex models, achieving a remarkable sparsity level (10 times and more beyond the current state-of-the-art) along with significantly improved accuracy.
arXiv Detail & Related papers (2023-12-21T12:36:53Z)
- How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels [13.885735785986164]
This paper investigates the robustness of federated learning to uplink and downlink communication errors.
It is shown that the uplink communication in FL can tolerate a higher bit error rate (BER) than downlink communication.
arXiv Detail & Related papers (2023-10-25T14:03:11Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Efficient Adaptive Federated Optimization of Federated Learning for IoT [0.0]
This paper proposes a novel efficient adaptive federated optimization (EAFO) algorithm to improve the efficiency of Federated Learning (FL).
FL is a distributed privacy-preserving learning framework that enables IoT devices to train a global model by sharing model parameters.
Experimental results show that the proposed EAFO can achieve higher accuracy faster.
arXiv Detail & Related papers (2022-06-23T01:49:12Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that relates the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the participating IoT devices, we can avoid training divergence of compression-aided FL while maintaining communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
- Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new detached error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex distributed problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
arXiv Detail & Related papers (2020-04-11T03:50:59Z)
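
For readers unfamiliar with the error-feedback-with-random-sparsification pattern named in the title above, here is a generic single-worker sketch of that pattern. It is not the paper's DEF/DEFA algorithms; the function names and the toy quadratic objective are assumptions for illustration only.

```python
import numpy as np

def random_sparsify(vec, keep_ratio, rng):
    """Randomly keep roughly a keep_ratio fraction of coordinates; zero the rest."""
    mask = rng.random(vec.shape) < keep_ratio
    return np.where(mask, vec, 0.0)

def ef_step(weights, grad, memory, keep_ratio, lr, rng):
    """Generic error feedback: compress (lr * grad + residual), apply only the
    compressed part, and carry the dropped part to the next step in `memory`."""
    corrected = lr * grad + memory
    compressed = random_sparsify(corrected, keep_ratio, rng)
    return weights - compressed, corrected - compressed

# Toy usage: minimize 0.5 * ||w - 1||^2 while sending ~30% of coordinates per step.
rng = np.random.default_rng(0)
w, mem = np.zeros(10), np.zeros(10)
for _ in range(200):
    g = w - 1.0  # gradient of the toy quadratic
    w, mem = ef_step(w, g, mem, keep_ratio=0.3, lr=0.1, rng=rng)
```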