FedADC: Accelerated Federated Learning with Drift Control
- URL: http://arxiv.org/abs/2012.09102v1
- Date: Wed, 16 Dec 2020 17:49:37 GMT
- Title: FedADC: Accelerated Federated Learning with Drift Control
- Authors: Emre Ozfatura and Kerem Ozfatura and Deniz Gunduz
- Abstract summary: Federated learning (FL) has become the de facto framework for collaborative learning among edge devices with privacy concerns.
Large-scale implementation of FL brings new challenges, such as the incorporation of acceleration techniques designed for SGD into the distributed setting, and mitigation of the drift problem caused by the non-homogeneous distribution of local datasets.
We show that it is possible to address both problems using a single strategy, without any major alteration to the FL framework or additional computation and communication load.
We propose FedADC, an accelerated FL algorithm with drift control.
- Score: 6.746400031322727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has become the de facto framework for
collaborative learning among edge devices with privacy concerns. The core of
the FL strategy is the use of stochastic gradient descent (SGD) in a
distributed manner. Large-scale implementation of FL brings new challenges,
such as the incorporation of acceleration techniques designed for SGD into the
distributed setting, and mitigation of the drift problem due to the
non-homogeneous distribution of local datasets. These two problems have been
studied separately in the literature, whereas in this paper we show that it is
possible to address both using a single strategy without any major alteration
to the FL framework or additional computation and communication load. To
achieve this goal, we propose FedADC, an accelerated FL algorithm with drift
control. We empirically illustrate the advantages of FedADC.
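The abstract describes the general setting (FL as distributed SGD, the need for SGD-style acceleration on the server side, and client drift under non-homogeneous local data) but not the update rule itself. The following is a minimal sketch of that generic setting, assuming plain FedAvg aggregation with a heavy-ball server momentum buffer on synthetic heterogeneous data; it is not the FedADC algorithm, and all names, data, and hyperparameters are illustrative.

```python
# Minimal NumPy sketch of FedAvg-style local SGD with a server-side momentum
# buffer -- the generic setting the abstract describes. This is NOT the FedADC
# update rule; it only marks where an accelerated, drift-aware rule would plug in.
import numpy as np

rng = np.random.default_rng(0)
DIM, CLIENTS, ROUNDS, LOCAL_STEPS, LR, SERVER_LR, BETA = 10, 8, 50, 5, 0.05, 1.0, 0.9

# Heterogeneous (non-IID) linear-regression clients: each has its own shifted optimum.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(CLIENTS):
    X = rng.normal(size=(100, DIM))
    y = X @ (true_w + 0.5 * rng.normal(size=DIM))  # client-specific drifted target
    clients.append((X, y))

def local_sgd(w, X, y):
    """Plain local SGD on one client's quadratic loss (the source of client drift)."""
    w = w.copy()
    for _ in range(LOCAL_STEPS):
        idx = rng.choice(len(X), size=10, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= LR * grad
    return w

w_global = np.zeros(DIM)
momentum = np.zeros(DIM)  # server-side buffer: acceleration/drift control live here
for r in range(ROUNDS):
    # Average of local model updates (pseudo-gradient), as in FedAvg.
    delta = np.mean([w_global - local_sgd(w_global, X, y) for X, y in clients], axis=0)
    momentum = BETA * momentum + delta   # heavy-ball style server momentum
    w_global -= SERVER_LR * momentum     # a FedADC-like rule would modify this interplay
    if r % 10 == 0:
        loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
        print(f"round {r:2d}  avg loss {loss:.3f}")
```

In schemes of this kind, the key design question is how the server-side momentum interacts with the local updates so that acceleration does not amplify the drift caused by heterogeneous local datasets; the sketch above leaves that interaction as plain heavy-ball momentum applied to the averaged pseudo-gradient.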
Related papers
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Magnitude Matters: Fixing SIGNSGD Through Magnitude-Aware Sparsification in the Presence of Data Heterogeneity [60.791736094073]
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks.
We propose a magnitude-driven sparsification scheme, which addresses the non-convergence issue of SIGNSGD.
The proposed scheme is validated through experiments on the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets; a generic sketch of sign-based compression with magnitude-aware sparsification and error feedback appears after this list.
arXiv Detail & Related papers (2023-02-19T17:42:35Z) - FedRC: Tackling Diverse Distribution Shifts Challenge in Federated Learning by Robust Clustering [4.489171618387544]
Federated Learning (FL) is a machine learning paradigm that safeguards privacy by retaining client data on edge devices.
In this paper, we identify the learning challenges posed by the simultaneous occurrence of diverse distribution shifts.
We propose a novel clustering algorithm framework, dubbed FedRC, which adheres to our proposed clustering principle.
arXiv Detail & Related papers (2023-01-29T06:50:45Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL; a generic sketch of such an estimator appears after this list.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Distributionally Robust Federated Averaging [19.875176871167966]
We present communication-efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling.
We give corroborating experimental evidence for our theoretical results in federated learning settings.
arXiv Detail & Related papers (2021-02-25T03:32:09Z) - Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new detached error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex distributed learning problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
arXiv Detail & Related papers (2020-04-11T03:50:59Z) - Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL), yet no existing method enjoys all of the desired properties.
We propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap.
arXiv Detail & Related papers (2020-02-25T15:12:15Z)
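Several of the entries above (the magnitude-aware SIGNSGD fix, detached error feedback, and stochastic-sign SGD) revolve around compressing the updates exchanged in distributed SGD. As referenced from the corresponding item, the following is a generic worker-side sketch, assuming a top-k magnitude sparsifier applied to a sign-style update together with an error-feedback residual; it is not the exact algorithm of any of the cited papers, and the scaling choice, value of k, and class names are assumptions.

```python
# Generic worker-side sketch of compression ideas recurring in the related papers:
# magnitude-aware (top-k) sparsification of a sign-style update, combined with an
# error-feedback residual. Not the exact algorithm of any cited paper.
import numpy as np

def compress_topk_sign(vec: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude coordinates and send their signs scaled by
    the mean magnitude of the kept coordinates (so the update has a sensible norm)."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]        # indices of top-k magnitudes
    out[idx] = np.sign(vec[idx]) * np.mean(np.abs(vec[idx]))
    return out

class ErrorFeedbackWorker:
    """Accumulates what compression threw away and re-injects it next round."""
    def __init__(self, dim: int, k: int):
        self.residual = np.zeros(dim)
        self.k = k

    def step(self, grad: np.ndarray) -> np.ndarray:
        corrected = grad + self.residual             # add back the previous residual
        msg = compress_topk_sign(corrected, self.k)  # what actually gets transmitted
        self.residual = corrected - msg              # remember what was dropped
        return msg

# Tiny usage example on random "gradients".
rng = np.random.default_rng(1)
worker = ErrorFeedbackWorker(dim=1000, k=50)
for t in range(3):
    msg = worker.step(rng.normal(size=1000))
    print(f"step {t}: nonzeros sent = {np.count_nonzero(msg)}, "
          f"residual norm = {np.linalg.norm(worker.residual):.2f}")
```

A server would then aggregate such messages, for example by averaging them or, in purely sign-based schemes, by coordinate-wise majority vote; how that aggregation behaves under data heterogeneity is the kind of question several of the papers above analyze.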
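The "Faster Adaptive Federated Learning" entry mentions a momentum-based variance-reduction technique; as referenced there, the following is a minimal sketch of a STORM-style recursive estimator, d_t = g(x_t; xi_t) + (1 - a)(d_{t-1} - g(x_{t-1}; xi_t)) with the same sample reused at both points, which is the standard construction that phrase usually points to. It is not the FAFED algorithm; the step size, momentum parameter, and toy problem are illustrative.

```python
# Minimal sketch of a STORM-style momentum-based variance-reduced gradient estimator
# on a toy quadratic. Shown only as background for the "momentum-based variance
# reduced technique" phrasing; this is not the FAFED algorithm.
import numpy as np

rng = np.random.default_rng(2)
DIM, STEPS, LR, A = 20, 200, 0.05, 0.1
target = rng.normal(size=DIM)

def stoch_grad(x, noise):
    """Stochastic gradient of 0.5*||x - target||^2 with additive sampling noise."""
    return (x - target) + noise

x = np.zeros(DIM)
d = stoch_grad(x, rng.normal(scale=0.5, size=DIM))   # initialize with one sample
for _ in range(STEPS):
    x_prev, x = x, x - LR * d
    noise = rng.normal(scale=0.5, size=DIM)          # ONE fresh sample, reused at both points
    d = stoch_grad(x, noise) + (1 - A) * (d - stoch_grad(x_prev, noise))
print("final distance to optimum:", np.linalg.norm(x - target))
```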
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.