Achieving Linear Speedup with Partial Worker Participation in Non-IID
Federated Learning
- URL: http://arxiv.org/abs/2101.11203v2
- Date: Thu, 25 Feb 2021 23:15:23 GMT
- Title: Achieving Linear Speedup with Partial Worker Participation in Non-IID
Federated Learning
- Authors: Haibo Yang, Minghong Fang, Jia Liu
- Abstract summary: Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data.
We show that the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL.
- Score: 6.994020662415705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed machine learning architecture that
leverages a large number of workers to jointly learn a model with decentralized
data. FL has received increasing attention in recent years thanks to its data
privacy protection, communication efficiency and a linear speedup for
convergence in training (i.e., convergence performance increases linearly with
respect to the number of workers). However, existing studies on linear speedup
for convergence are only limited to the assumptions of i.i.d. datasets across
workers and/or full worker participation, both of which rarely hold in
practice. So far, it remains an open question whether or not the linear speedup
for convergence is achievable under non-i.i.d. datasets with partial worker
participation in FL. In this paper, we show that the answer is affirmative.
Specifically, we show that the federated averaging (FedAvg) algorithm (with
two-sided learning rates) on non-i.i.d. datasets in non-convex settings
achieves a convergence rate $\mathcal{O}(\frac{1}{\sqrt{mKT}} + \frac{1}{T})$
for full worker participation and a convergence rate
$\mathcal{O}(\frac{1}{\sqrt{nKT}} + \frac{1}{T})$ for partial worker
participation, where $K$ is the number of local steps, $T$ is the number of
total communication rounds, $m$ is the total number of workers, and $n$ is the
number of participating workers in each communication round under partial worker
participation.
Our results also reveal that local steps in FL can help convergence and show
that the maximum number of local steps can be improved to $T/m$. We
conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical
results.
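To make the two-sided learning-rate scheme described above concrete, here is a minimal sketch of FedAvg with separate local and server learning rates and uniform partial worker participation. The function `fedavg_two_sided`, the uniform sampling scheme, and all hyperparameter values are my own illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (assumed structure, not the authors' code): FedAvg with a local
# learning rate for the K on-device SGD steps and a server learning rate applied
# to the averaged model delta, with n of m workers sampled per round.
import numpy as np

def fedavg_two_sided(x0, local_grad, m, n, K, T,
                     eta_local=0.01, eta_server=1.0, seed=0):
    """x0: initial model (np.ndarray); local_grad(i, x): stochastic gradient of worker i at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(T):                                    # T communication rounds
        chosen = rng.choice(m, size=n, replace=False)     # partial participation: n of m workers
        deltas = []
        for i in chosen:
            x_i = x.copy()
            for _ in range(K):                            # K local SGD steps with the local rate
                x_i = x_i - eta_local * local_grad(i, x_i)
            deltas.append(x_i - x)                        # model delta sent back to the server
        x = x + eta_server * np.mean(deltas, axis=0)      # server-side learning rate on the average
    return x

# Toy usage: workers hold shifted quadratic losses as a crude stand-in for non-i.i.d. data.
rng = np.random.default_rng(1)
shifts = np.linspace(-1.0, 1.0, 8)
grad = lambda i, x: (x - shifts[i]) + 0.1 * rng.standard_normal(x.shape)
x_final = fedavg_two_sided(np.zeros(3), grad, m=8, n=4, K=5, T=200)
```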
Related papers
- Vertical Federated Learning with Missing Features During Training and Inference [37.44022318612869]
We propose a vertical federated learning method for efficient training and inference of neural network-based models.
Our approach is simple yet effective, relying on the strategic sharing of parameters and on task-sampling during training and inference.
Numerical experiments show improved performance of LASER-VFL over the baselines.
arXiv Detail & Related papers (2024-10-29T22:09:31Z) - Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization [65.8915778873691]
Learning conditional distributions is a central problem in machine learning.
We propose a new learning paradigm that integrates both paired and unpaired data.
Our approach also connects intriguingly with inverse entropic optimal transport (OT).
arXiv Detail & Related papers (2024-10-03T16:12:59Z) - A Specialized Semismooth Newton Method for Kernel-Based Optimal
Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z) - On the Convergence of Federated Averaging under Partial Participation for Over-parameterized Neural Networks [13.2844023993979]
Federated learning (FL) is a widely employed distributed paradigm for collaboratively training machine learning models from multiple clients without sharing local data.
In this paper, we show that FedAvg converges to a global minimum for over-parameterized neural networks under partial participation.
arXiv Detail & Related papers (2023-10-09T07:56:56Z) - DFedADMM: Dual Constraints Controlled Model Inconsistency for
Decentralized Federated Learning [52.83811558753284]
Decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network.
Existing DFL methods still suffer from two major challenges: local inconsistency and local overfitting.
arXiv Detail & Related papers (2023-08-16T11:22:36Z) - Achieving Linear Speedup in Non-IID Federated Bilevel Learning [16.56643290676128]
We propose a new federated bilevel algorithm named FedMBO.
We show that FedMBO achieves a convergence rate of $\mathcal{O}\big(\frac{1}{\sqrt{nK}}+\frac{1}{K}+\frac{\sqrt{n}}{K^{3/2}}\big)$ on non-i.i.d. datasets.
This is the first theoretical linear speedup result for non-i.i.d. federated bilevel optimization.
arXiv Detail & Related papers (2023-02-10T18:28:00Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration (a toy illustration of such sketching appears after this list).
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - FedLGA: Towards System-Heterogeneity of Federated Learning via Local
Gradient Approximation [21.63719641718363]
We formalize the system-heterogeneous FL problem and propose a new algorithm, called FedLGA, which addresses this problem by bridging the divergence of local model updates via gradient approximation.
The results of comprehensive experiments on multiple datasets show that FedLGA outperforms current FL benchmarks in the presence of system heterogeneity.
arXiv Detail & Related papers (2021-12-22T16:05:09Z) - CFedAvg: Achieving Efficient Communication and Fast Convergence in
Non-IID Federated Learning [8.702106020664612]
Federated learning (FL) is a prevailing distributed learning paradigm, where a large number of workers jointly learn a model without sharing their training data.
High communication costs could arise in FL due to large-scale (deep) learning models and bandwidth-limited connections.
We introduce a communication-efficient algorithmic framework called CFedAvg for FL with unbiased SNR-constrained compressors.
arXiv Detail & Related papers (2021-06-14T04:27:19Z) - Variance Reduced Local SGD with Lower Communication Complexity [52.44473777232414]
We propose Variance Reduced Local SGD to further reduce the communication complexity.
VRL-SGD achieves a linear iteration speedup with a lower communication complexity $O(T^{\frac{1}{2}} N^{\frac{3}{2}})$ even if workers access non-identical datasets.
arXiv Detail & Related papers (2019-12-30T08:15:21Z)
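As a companion to the SketchedAMSGrad entry above, the toy count-sketch compressor below illustrates the general idea behind gradient sketching: a $d$-dimensional gradient is compressed into $r \times k$ numbers, so with $r$ on the order of $\log(d)$ rows the per-round payload is roughly $O(k \log(d))$, matching the quoted communication cost. This is a generic illustration written under my own assumptions, not that paper's algorithm or implementation.

```python
# Generic count-sketch gradient compression (illustrative only, not SketchedAMSGrad itself):
# each coordinate is hashed into one of k buckets per row with a random sign; the receiver
# decodes each coordinate by taking the median estimate across the r rows.
import numpy as np

def count_sketch(grad, k, r, seed=0):
    """Compress a gradient of shape (d,) into an r x k sketch (shared seed fixes the hashes)."""
    d = grad.shape[0]
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, k, size=(r, d))        # bucket index per row and coordinate
    signs = rng.choice([-1.0, 1.0], size=(r, d))     # random sign per row and coordinate
    sketch = np.zeros((r, k))
    for row in range(r):
        np.add.at(sketch[row], buckets[row], signs[row] * grad)
    return sketch, buckets, signs

def unsketch(sketch, buckets, signs):
    """Median-of-rows estimate of each original coordinate (heavy coordinates are recovered best)."""
    r, _ = buckets.shape
    estimates = np.stack([signs[row] * sketch[row][buckets[row]] for row in range(r)])
    return np.median(estimates, axis=0)

# Toy usage: a 10,000-dimensional gradient compressed to a 7 x 64 sketch.
g = np.random.default_rng(1).standard_normal(10_000)
S, b, s = count_sketch(g, k=64, r=7)
g_hat = unsketch(S, b, s)                            # rough reconstruction from the sketch
```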