From Deterioration to Acceleration: A Calibration Approach to
Rehabilitating Step Asynchronism in Federated Optimization
- URL: http://arxiv.org/abs/2112.09355v1
- Date: Fri, 17 Dec 2021 07:26:31 GMT
- Title: From Deterioration to Acceleration: A Calibration Approach to
Rehabilitating Step Asynchronism in Federated Optimization
- Authors: Feijie Wu, Song Guo, Haozhao Wang, Zhihao Qu, Haobo Zhang, Jie Zhang,
Ziming Liu
- Abstract summary: We propose a new algorithm \texttt{FedaGrac}, which calibrates the local direction to a predictive global orientation.
We theoretically prove that \texttt{FedaGrac} achieves a better order of convergence rate than the state-of-the-art approaches.
- Score: 13.755421424240048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the setting of federated optimization, where a global model is aggregated
periodically, step asynchronism occurs when participants conduct model training
by fully utilizing their own computational resources, so that faster nodes perform more local steps between aggregations. It is well acknowledged
that step asynchronism leads to objective inconsistency under non-i.i.d. data,
which degrades the model accuracy. To address this issue, we propose a new
algorithm \texttt{FedaGrac}, which calibrates the local direction to a
predictive global orientation. Taking advantage of the estimated orientation,
we guarantee that the aggregated model does not excessively deviate from the
expected orientation while fully utilizing the local updates of faster nodes.
We theoretically prove that \texttt{FedaGrac} achieves an improved order of
convergence rate compared with the state-of-the-art approaches and eliminates the
negative effect of step asynchronism. Empirical results show that our algorithm
accelerates the training and enhances the final accuracy.
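The abstract describes the mechanism only at a high level (calibrating each client's local direction toward a predictive global orientation). As a rough illustration, the minimal sketch below uses a SCAFFOLD-style correction term as a stand-in for that calibration; the function names, the choice of correction, and the way the global direction is refreshed are assumptions for illustration, not the algorithm from the paper.

```python
import numpy as np

def calibrated_local_update(w_global, grad_fn, global_direction, lr, local_steps):
    """Run one client's local steps with a calibrated descent direction.

    Hypothetical sketch: grad_fn(w) returns the client's stochastic gradient
    at w, and global_direction is the server's estimate of the global descent
    direction from the previous round. Each local step replaces the raw local
    gradient with a corrected one, so a fast client running many steps does
    not drift toward its own local optimum.
    """
    w = w_global.copy()
    local_direction = grad_fn(w_global)  # client's own direction estimate at the start
    for _ in range(local_steps):
        g = grad_fn(w)
        # calibrate: subtract the client-specific bias, add the global estimate
        w = w - lr * (g - local_direction + global_direction)
    return w

def server_round(w_global, clients, global_direction, lr):
    """One communication round with heterogeneous local step counts."""
    deltas = []
    for grad_fn, steps in clients:  # (gradient oracle, local step count) per client
        w_k = calibrated_local_update(w_global, grad_fn, global_direction, lr, steps)
        deltas.append(w_k - w_global)
    w_global = w_global + np.mean(deltas, axis=0)  # average the calibrated client updates
    # refresh the predictive global orientation for the next round
    global_direction = np.mean([grad_fn(w_global) for grad_fn, _ in clients], axis=0)
    return w_global, global_direction
```

In this sketch the correction term plays the role of the "predictive global orientation"; the paper's actual estimator and client weighting may differ.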
Related papers
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
arXiv Detail & Related papers (2024-07-25T20:02:57Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Momentum Approximation in Asynchronous Private Federated Learning [26.57367597853813]
Momentum approximation can achieve a $1.15\textrm{--}4\times$ speed-up in convergence compared to existing FL algorithms with momentum.
Momentum approximation can be easily integrated in production FL systems with a minor communication and storage cost.
arXiv Detail & Related papers (2024-02-14T15:35:53Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup [90.26270347459915]
We propose a novel momentum-based algorithm that utilizes the global descent direction to amend the locally adaptive optimizer.
\textit{LADA} greatly reduces the number of communication rounds and achieves higher accuracy than several baselines.
arXiv Detail & Related papers (2023-07-30T14:53:21Z)
- Blind Asynchronous Over-the-Air Federated Edge Learning [15.105440618101147]
Federated Edge Learning (FEEL) is a distributed machine learning technique.
We propose a novel synchronization-free method to recover the parameters of the global model over the air.
Our proposed algorithm stays within $10\%$ of the ideal synchronized scenario and performs $4\times$ better than the simple case.
arXiv Detail & Related papers (2022-10-31T16:54:14Z)
- Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout [22.584080337157168]
Asynchronous learning protocols have regained attention lately, especially in the Federated Learning (FL) setup.
We propose \texttt{AsyncDrop}, a novel asynchronous FL framework that utilizes dropout regularization to handle device heterogeneity in distributed settings.
Overall, \texttt{AsyncDrop} achieves better performance compared to state-of-the-art asynchronous methodologies.
arXiv Detail & Related papers (2022-10-28T13:00:29Z)
- AsyncFedED: Asynchronous Federated Learning with Euclidean Distance based Adaptive Weight Aggregation [17.57059932879715]
In an asynchronous learning framework, a server updates the global model once it receives an update from a client instead of waiting for all the updates to arrive as in the synchronous setting.
An adaptive weight aggregation algorithm, referred to as AsyncFedED, is proposed.
arXiv Detail & Related papers (2022-05-27T07:18:11Z)
- Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees [10.984101749941471]
We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms.
Results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates.
arXiv Detail & Related papers (2021-09-09T19:08:56Z)
- Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization [93.78811018928583]
This paper provides a framework to analyze the convergence of federated heterogeneous optimization algorithms.
We propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence (a sketch of the normalized-averaging idea appears after this list).
arXiv Detail & Related papers (2020-07-15T05:01:23Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered by a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
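Since FedNova (listed above) is the standard reference point for the objective inconsistency that FedaGrac targets, here is a minimal sketch of the normalized-averaging idea it is known for: each client's accumulated update is divided by its number of local steps before averaging, so clients that ran more steps do not pull the global model toward their local objectives. The variable names and the simplified handling of the effective step count are illustrative, not a faithful reproduction of that paper.

```python
import numpy as np

def fednova_aggregate(w_global, client_updates, weights=None):
    """Normalized averaging in the spirit of FedNova (simplified sketch).

    client_updates: list of (delta, tau) pairs, where `delta` is the
    accumulated local change w_local - w_global and `tau` is the number
    of local steps that produced it. Normalizing each delta by tau keeps
    fast clients (large tau) from dominating the aggregate, which is the
    source of objective inconsistency under step asynchronism.
    """
    n = len(client_updates)
    if weights is None:
        weights = np.full(n, 1.0 / n)  # uniform client weights
    # per-step directions: each client's update normalized by its step count
    normalized = [delta / tau for delta, tau in client_updates]
    # effective number of steps applied to the global model this round
    tau_eff = sum(w * tau for w, (_, tau) in zip(weights, client_updates))
    aggregate = sum(w * d for w, d in zip(weights, normalized))
    return w_global + tau_eff * aggregate
```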