Revisiting Communication-Efficient Federated Learning with Balanced
Global and Local Updates
- URL: http://arxiv.org/abs/2205.01470v1
- Date: Tue, 3 May 2022 13:05:26 GMT
- Title: Revisiting Communication-Efficient Federated Learning with Balanced
Global and Local Updates
- Authors: Zhigang Yan, Dong Li, Zhichao Zhang and Jiguang He
- Abstract summary: We investigate and analyze the optimal trade-off between the number of local trainings and that of global aggregations.
Our proposed scheme achieves better prediction accuracy and converges much faster than the baseline schemes.
- Score: 14.851898446967672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), a number of devices train their local models and
upload the corresponding parameters or gradients to the base station (BS) to
update the global model while protecting their data privacy. However, due to
the limited computation and communication resources, the number of local
trainings (a.k.a. local update) and that of aggregations (a.k.a. global update)
need to be carefully chosen. In this paper, we investigate and analyze the
optimal trade-off between the number of local trainings and that of global
aggregations to speed up the convergence and enhance the prediction accuracy
over the existing works. Our goal is to minimize the global loss function under
both the delay and the energy consumption constraints. In order to make the
optimization problem tractable, we derive a new and tight upper bound on the
loss function, which allows us to obtain closed-form expressions for the number
of local trainings and that of global aggregations. Simulation results show
that our proposed scheme achieves better prediction accuracy and converges
much faster than the baseline schemes.
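The trade-off the abstract describes can be made concrete with a toy FedAvg-style loop. The sketch below is illustrative only, not the paper's optimization scheme: the linear-regression task, device count, learning rate, and the (aggregations, local steps) splits are assumptions, and the balance is explored empirically rather than via the paper's closed-form expressions.

```python
# Illustrative sketch (not the authors' method): a FedAvg-style loop exposing the
# two quantities the abstract balances -- local trainings per round and global aggregations.
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 5, 10
true_w = rng.normal(size=dim)

# Each device holds its own toy linear-regression dataset (assumption).
data = []
for _ in range(num_devices):
    X = rng.normal(size=(50, dim))
    data.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

def local_training(w, X, y, num_local_steps, lr=0.05):
    """Run num_local_steps gradient-descent steps on one device's local data."""
    w = w.copy()
    for _ in range(num_local_steps):
        w -= lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

def run_fl(num_global_aggregations, num_local_steps):
    """Alternate local training on every device with global averaging at the BS."""
    w_global = np.zeros(dim)
    for _ in range(num_global_aggregations):
        local_models = [local_training(w_global, X, y, num_local_steps) for X, y in data]
        w_global = np.mean(local_models, axis=0)  # global aggregation
    return w_global

# Fix a total budget of 200 local steps per device and vary how it is split
# between local training and global aggregation.
for aggregations, local_steps in [(40, 5), (20, 10), (10, 20)]:
    w = run_fl(aggregations, local_steps)
    print(f"{aggregations:2d} aggregations x {local_steps:2d} local steps: "
          f"error = {np.linalg.norm(w - true_w):.4f}")
```

Under a real delay and energy budget, more local steps per round generally mean fewer affordable aggregations; the paper's contribution is to derive that balance in closed form from an upper bound on the loss, which this sketch only probes numerically.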
Related papers
- Neighborhood and Global Perturbations Supported SAM in Federated Learning: From Local Tweaks To Global Awareness [29.679323144520037]
Federated Learning (FL) can be coordinated under the orchestration of a central server to build a privacy-preserving model.
We propose a novel FL algorithm, FedTOGA, designed to consider generalization objectives while maintaining minimal uplink communication overhead.
arXiv Detail & Related papers (2024-08-26T09:42:18Z) - Decentralized Federated Learning Over Imperfect Communication Channels [68.08499874460857]
This paper analyzes the impact of imperfect communication channels on decentralized federated learning (D-FL).
It determines the optimal number of local aggregations per training round, adapting to the network topology and imperfect channels.
It is seen that D-FL, with an optimal number of local aggregations, can outperform its potential alternatives by over 10% in training accuracy.
arXiv Detail & Related papers (2024-05-21T16:04:32Z) - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is coordinated by a global server.
Clients are prone to overfit to their own local optima, which can deviate significantly from the global objective.
FedSMOO adopts a dynamic regularizer to steer the local optima towards the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z) - Delay-Aware Hierarchical Federated Learning [7.292078085289465]
The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training.
During global synchronization, the cloud server consolidates local models with an outdated global model using a convex control algorithm.
Numerical evaluations show DFL's superior performance in terms of faster global model convergence, reduced resource consumption, and resilience to communication delays.
arXiv Detail & Related papers (2023-03-22T09:23:29Z) - Integrating Local Real Data with Global Gradient Prototypes for
Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z) - Global Update Guided Federated Learning [11.731231528534035]
Federated learning protects data privacy and security by exchanging models instead of data.
We propose global-update-guided federated learning (FedGG), which introduces a model-cosine loss into local objective functions.
Numerical simulations show that FedGG has a significant improvement on model convergence accuracies and speeds.
arXiv Detail & Related papers (2022-04-08T08:36:26Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Local Adaptivity in Federated Learning: Convergence and Consistency [25.293584783673413]
The federated learning (FL) framework trains a machine learning model using decentralized data stored at edge client devices by periodically aggregating locally trained models.
We show in both theory and practice that while local adaptive methods can accelerate convergence, they can cause a non-vanishing solution bias.
We propose correction techniques to overcome this inconsistency and complement the local adaptive methods for FL.
arXiv Detail & Related papers (2021-06-04T07:36:59Z) - Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z) - Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted (see the quantization sketch after this list).
arXiv Detail & Related papers (2020-06-18T16:55:20Z) - Jointly Optimizing Dataset Size and Local Updates in Heterogeneous
Mobile Edge Learning [11.191719032853527]
This paper proposes to maximize the accuracy of a distributed machine learning (ML) model trained on learners connected via the resource-constrained wireless edge.
We jointly optimize the number of local/global updates and the task size allocation to minimize the loss while taking into account heterogeneous communication and computation capabilities of each learner.
arXiv Detail & Related papers (2020-06-12T18:19:20Z)
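To make the model-cosine loss mentioned in the Global Update Guided Federated Learning (FedGG) entry above more concrete, here is a minimal sketch of one way such a term can enter a local objective. The exact loss form, weighting, and definition of the guiding direction in FedGG may differ; lambda_cos, the cross-entropy task loss, and the toy shapes in the usage are assumptions.

```python
# Hedged sketch of a model-cosine style regularizer; not necessarily FedGG's exact formulation.
import torch
import torch.nn.functional as F

def local_loss(model, w_global_prev, global_update, batch, lambda_cos=0.1):
    """Task loss plus a penalty when the local update drifts away from the global update direction."""
    x, y = batch
    task = F.cross_entropy(model(x), y)
    # Flatten the current local parameters and measure the update relative to the last global model.
    w_local = torch.cat([p.reshape(-1) for p in model.parameters()])
    cos = F.cosine_similarity(w_local - w_global_prev, global_update, dim=0)
    return task + lambda_cos * (1.0 - cos)  # low alignment -> larger penalty

# Toy usage (shapes and stand-in tensors are assumptions).
model = torch.nn.Linear(8, 3)
flat = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
w_global_prev = torch.zeros_like(flat)   # stand-in for the previously broadcast global model
global_update = torch.randn_like(flat)   # stand-in for the last global update direction
batch = (torch.randn(16, 8), torch.randint(0, 3, (16,)))
local_loss(model, w_global_prev, global_update, batch).backward()
```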
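The lossy FL (LFL) entry above quantizes both the global model and the local model updates before transmission. The sketch below illustrates the general idea with a uniform stochastic quantizer; this is a common generic choice, not necessarily the scheme used in that paper, and the bit-width and vector size are assumptions.

```python
# Generic uniform stochastic quantizer as an illustration of lossy update transmission.
import numpy as np

def quantize(update, num_bits=4, rng=np.random.default_rng(0)):
    """Quantize a flat parameter/update vector to 2**num_bits levels with unbiased stochastic rounding."""
    levels = 2 ** num_bits - 1
    lo, hi = update.min(), update.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (update - lo) / scale
    floor = np.floor(normalized)
    # Round up with probability equal to the fractional part, so E[quantized] == normalized.
    quantized = floor + (rng.random(update.shape) < (normalized - floor))
    return lo + quantized * scale  # value the receiver reconstructs from the index and (lo, scale)

local_update = np.random.default_rng(1).normal(size=1000)
recovered = quantize(local_update, num_bits=4)
print("mean absolute quantization error:", np.abs(recovered - local_update).mean())
```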
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.