Adaptive Federated Optimization
- URL: http://arxiv.org/abs/2003.00295v5
- Date: Wed, 8 Sep 2021 23:37:17 GMT
- Title: Adaptive Federated Optimization
- Authors: Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith
Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan
- Abstract summary: In federated learning, a large number of clients coordinate with a central server to learn a model without sharing their own data.
In non-federated settings, adaptive optimization methods have had notable success in combating tuning and convergence issues.
We show that the use of adaptive optimizers can significantly improve the performance of federated learning.
- Score: 43.78438670284309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed machine learning paradigm in which a
large number of clients coordinate with a central server to learn a model
without sharing their own training data. Standard federated optimization
methods such as Federated Averaging (FedAvg) are often difficult to tune and
exhibit unfavorable convergence behavior. In non-federated settings, adaptive
optimization methods have had notable success in combating such issues. In this
work, we propose federated versions of adaptive optimizers, including Adagrad,
Adam, and Yogi, and analyze their convergence in the presence of heterogeneous
data for general non-convex settings. Our results highlight the interplay
between client heterogeneity and communication efficiency. We also perform
extensive experiments on these methods and show that the use of adaptive
optimizers can significantly improve the performance of federated learning.
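As a concrete illustration of the setup the abstract describes, the following is a minimal sketch of a server-side adaptive update in the FedAdam style: clients run a few local SGD steps, the server averages their model deltas, treats the average as a pseudo-gradient, and applies an Adam-like step. The toy quadratic objective, hyperparameters, and function names are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def client_update(x, data, lr=0.01, steps=5):
    """Run a few local SGD steps on a toy quadratic client loss ||y - data||^2."""
    y = x.copy()
    for _ in range(steps):
        grad = 2.0 * (y - data)       # gradient of the local quadratic loss
        y -= lr * grad
    return y - x                      # model delta reported to the server

def fed_adam(clients, rounds=100, server_lr=0.1, beta1=0.9, beta2=0.99, tau=1e-3):
    """Server treats the averaged client delta as a pseudo-gradient for an Adam-style step."""
    x = np.zeros_like(clients[0])
    m = np.zeros_like(x)              # first-moment estimate
    v = np.zeros_like(x)              # second-moment estimate
    for _ in range(rounds):
        deltas = [client_update(x, c) for c in clients]
        delta = np.mean(deltas, axis=0)           # aggregate client updates
        m = beta1 * m + (1 - beta1) * delta
        v = beta2 * v + (1 - beta2) * delta ** 2
        x = x + server_lr * m / (np.sqrt(v) + tau)
    return x

if __name__ == "__main__":
    # Three heterogeneous clients whose local optima differ (non-IID data).
    clients = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
    print(fed_adam(clients))          # moves toward the mean of the client optima
```

In this framework, variants such as FedAdagrad and FedYogi differ only in how the second-moment accumulator `v` is updated; the client side remains plain SGD.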
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The recommender system (RSRS) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
arXiv Detail & Related papers (2024-07-25T20:02:57Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Locally Adaptive Federated Learning [30.19411641685853]
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model.
Standard federated optimization methods such as Federated Averaging (FedAvg) ensure generalization among the clients.
We propose locally adaptive federated learning algorithms that leverage the local geometric information of each client's loss function.
arXiv Detail & Related papers (2023-07-12T17:02:32Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Accelerated Federated Learning with Decoupled Adaptive Optimization [53.230515878096426]
The federated learning (FL) framework enables clients to collaboratively learn a shared model while keeping their training data private on the clients.
Recently, many efforts have been made to generalize centralized adaptive optimization methods, such as SGDM, Adam, and AdaGrad, to federated settings.
This work aims to develop novel adaptive optimization methods for FL from the perspective of the dynamics of ordinary differential equations (ODEs).
arXiv Detail & Related papers (2022-07-14T22:46:43Z)
- Communication-Efficient Adaptive Federated Learning [17.721884358895686]
Federated learning is a machine learning paradigm that enables clients to jointly train models without sharing their own local data.
The implementation of federated learning in practice still faces numerous challenges, such as the large communication overhead.
We propose a novel communication-efficient adaptive federated learning method (FedCAMS) with theoretical convergence guarantees.
arXiv Detail & Related papers (2022-05-05T15:47:04Z)
- Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data [18.678289386084113]
Federated learning allows devices to collaboratively learn a model without data sharing.
We propose Federated AGMs, which employ both first-order and second-order momenta.
We compare calibration schemes of the adaptive learning rate for federated learning, including the standard Adam calibration by epsilon (an illustrative sketch follows this list).
arXiv Detail & Related papers (2020-09-14T16:37:44Z)
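As referenced in the last entry above, the calibration being compared concerns how the adaptive denominator of an Adam-style step is kept away from zero. The sketch below only illustrates why that choice matters; the scheme names, the floor-based variant, and the constants are assumptions, not the paper's exact formulations.

```python
import numpy as np

def calibrated_step(delta, v, scheme="epsilon", eps=1e-3):
    """Scale a pseudo-gradient by a calibrated second-moment estimate."""
    if scheme == "epsilon":                       # standard Adam: additive epsilon
        return delta / (np.sqrt(v) + eps)
    if scheme == "floor":                         # hypothetical alternative: hard floor
        return delta / np.maximum(np.sqrt(v), eps)
    raise ValueError(f"unknown calibration scheme: {scheme}")

# Coordinates with very different gradient magnitudes, as can arise under non-IID data.
delta = np.array([1e-4, 1e-1])
v = delta ** 2                                    # second-moment estimate tracking delta^2
for scheme in ("epsilon", "floor"):
    print(scheme, calibrated_step(delta, v, scheme))
```

Small-magnitude coordinates receive noticeably different effective step sizes under the two calibrations, which is the kind of effect such comparisons quantify.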