AsyncFedED: Asynchronous Federated Learning with Euclidean Distance
based Adaptive Weight Aggregation
- URL: http://arxiv.org/abs/2205.13797v1
- Date: Fri, 27 May 2022 07:18:11 GMT
- Title: AsyncFedED: Asynchronous Federated Learning with Euclidean Distance
based Adaptive Weight Aggregation
- Authors: Qiyuan Wang, Qianqian Yang, Shibo He, Zhiguo Shui, Jiming Chen
- Abstract summary: In an asynchronous federated learning framework, a server updates the global model once it receives an update from a client instead of waiting for all the updates to arrive as in the synchronous setting.
An adaptive weight aggregation algorithm, referred to as AsyncFedED, is proposed.
- Score: 17.57059932879715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In an asynchronous federated learning framework, the server updates the
global model once it receives an update from a client instead of waiting for
all the updates to arrive as in the synchronous setting. This allows
heterogeneous devices with varied computing power to train the local models
without pausing, thereby speeding up the training process. However, it
introduces the stale model problem, where the newly arrived update was
calculated based on a set of stale weights that are older than the current
global model, which may hurt the convergence of the model. In this paper, we
present an asynchronous federated learning framework with a proposed adaptive
weight aggregation algorithm, referred to as AsyncFedED. To the best of our
knowledge, this aggregation method is the first to take into account the
staleness of the arrived gradients, measured by the Euclidean distance between
the stale model and the current global model, together with the number of
local epochs that have been performed. Assuming general non-convex loss
functions, we prove
the convergence of the proposed method theoretically. Numerical results
validate the effectiveness of the proposed AsyncFedED in terms of the
convergence rate and model accuracy compared to the existing methods for three
considered tasks.
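The abstract describes the aggregation rule only at a high level: the weight given to an incoming update adapts to its staleness, measured as the Euclidean distance between the client's stale starting model and the current global model, and to the number of local epochs performed. The snippet below is a minimal illustrative sketch of that idea; the function names, the specific weighting formula, and the hyperparameter `alpha` are assumptions for illustration, not the exact AsyncFedED rule from the paper.

```python
import numpy as np

def adaptive_weight(w_global, w_stale, local_epochs, alpha=1.0, eps=1e-8):
    """Illustrative staleness-adaptive weight (not the exact AsyncFedED rule):
    the larger the Euclidean distance between the model the client started
    from (w_stale) and the current global model (w_global), the smaller the
    weight; more local epochs count as more useful local work."""
    staleness = np.linalg.norm(w_global - w_stale)
    return alpha * local_epochs / (local_epochs + staleness + eps)

def async_server_step(w_global, w_stale, client_update, local_epochs):
    """Apply a client's update as soon as it arrives (no waiting for others)."""
    eta = adaptive_weight(w_global, w_stale, local_epochs)
    return w_global + eta * client_update

# Toy example: one asynchronous aggregation step.
w_global = np.zeros(4)
w_stale = np.array([0.1, -0.2, 0.0, 0.3])        # model version the client trained from
client_update = np.array([-0.05, 0.02, 0.01, -0.03])
w_global = async_server_step(w_global, w_stale, client_update, local_epochs=3)
print(w_global)
```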
Related papers
- FedStaleWeight: Buffered Asynchronous Federated Learning with Fair Aggregation via Staleness Reweighting [9.261784956541641]
Asynchronous Federated Learning (AFL) methods have emerged as promising alternatives to their synchronous counterparts, which are bottlenecked by the slowest agent.
AFL biases model training heavily towards agents that can produce updates faster, leaving slower agents behind.
We introduce FedStaleWeight, an algorithm addressing fairness in aggregating asynchronous client updates by employing average staleness to compute fair re-weightings (a generic staleness re-weighting sketch appears after this list).
arXiv Detail & Related papers (2024-06-05T02:52:22Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Mitigating System Bias in Resource Constrained Asynchronous Federated
Learning Systems [2.8790600498444032]
We propose a dynamic global model aggregation method within Asynchronous Federated Learning (AFL) deployments.
Our method scores and adjusts the weighting of client model updates based on their upload frequency to accommodate differences in device capabilities.
arXiv Detail & Related papers (2024-01-24T10:51:15Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Aggregation Weighting of Federated Learning via Generalization Bound
Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z) - Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z) - Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - Straggler-Resilient Decentralized Learning via Adaptive Asynchronous Updates [28.813671194939225]
Fully decentralized optimization methods have been advocated as alternatives to the popular parameter server framework.
We propose a fully decentralized algorithm with adaptive asynchronous updates via adaptively determining the number of neighbor workers for each worker to communicate with.
We show that DSGD-AAU achieves a linear speedup for convergence and demonstrate its effectiveness via extensive experiments.
arXiv Detail & Related papers (2023-06-11T02:08:59Z) - Blind Asynchronous Over-the-Air Federated Edge Learning [15.105440618101147]
Federated Edge Learning (FEEL) is a distributed machine learning technique.
We propose a novel synchronization-free method to recover the parameters of the global model over the air.
Our proposed algorithm comes within $10\%$ of the ideal synchronized scenario and performs $4\times$ better than the simple case.
arXiv Detail & Related papers (2022-10-31T16:54:14Z) - From Deterioration to Acceleration: A Calibration Approach to
Rehabilitating Step Asynchronism in Federated Optimization [13.755421424240048]
We propose a new algorithm, FedaGrac, which calibrates the local direction to a predictive global orientation.
We theoretically prove that FedaGrac achieves a better order of convergence rate than the state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-17T07:26:31Z) - Tackling the Objective Inconsistency Problem in Heterogeneous Federated
Optimization [93.78811018928583]
This paper provides a framework to analyze the convergence of federated heterogeneous optimization algorithms.
We propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence (a sketch of this normalized-averaging idea appears after this list).
arXiv Detail & Related papers (2020-07-15T05:01:23Z)
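Several of the papers above (FedStaleWeight in particular) re-weight buffered asynchronous updates by how stale they are relative to the average staleness in the buffer. The sketch below illustrates that general idea, assuming staleness is counted in server rounds; the weighting function and the parameter `beta` are illustrative and not taken from any of the listed papers.

```python
import numpy as np

def staleness_reweighted_average(updates, staleness, beta=1.0):
    """Average a buffer of client updates, down-weighting updates whose
    staleness exceeds the buffer average (illustrative re-weighting, not
    the exact FedStaleWeight formula).

    updates   -- list of np.ndarray, one buffered update per client
    staleness -- list of ints, server rounds since each client pulled the model
    """
    s = np.asarray(staleness, dtype=float)
    raw = (1.0 + s.mean()) / (1.0 + beta * s)   # fresher than average -> larger weight
    weights = raw / raw.sum()                   # normalise weights to sum to one
    return sum(w * u for w, u in zip(weights, updates))

# Toy usage: three buffered updates with staleness 0, 2 and 6 rounds.
updates = [np.ones(3), 2 * np.ones(3), -np.ones(3)]
aggregate = staleness_reweighted_average(updates, staleness=[0, 2, 6])
print(aggregate)
```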
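FedNova (last entry above) removes objective inconsistency by normalising each client's accumulated update by the amount of local work before averaging. Below is a minimal sketch assuming plain local SGD, so the normaliser is simply each client's number of local steps; the full method in the paper also handles momentum and proximal local solvers.

```python
import numpy as np

def fednova_style_aggregate(w_global, client_deltas, local_steps, data_sizes):
    """Normalized averaging in the spirit of FedNova (plain-SGD special case).

    client_deltas -- list of np.ndarray, w_global - w_i after local training
    local_steps   -- list of ints, local SGD steps each client performed
    data_sizes    -- list of floats/ints, used as aggregation weights p_i
    """
    p = np.asarray(data_sizes, dtype=float)
    p = p / p.sum()
    tau = np.asarray(local_steps, dtype=float)
    # Normalise each client's change by its own step count so that clients
    # running more local steps do not dominate the aggregated direction.
    direction = sum(p_i * d / t for p_i, d, t in zip(p, client_deltas, tau))
    tau_eff = float(p @ tau)                    # effective number of local steps
    return w_global - tau_eff * direction       # rescale to keep update magnitude

# Toy usage: two clients with different amounts of local work.
w = np.zeros(3)
deltas = [np.array([0.3, 0.0, -0.3]), np.array([0.05, 0.1, 0.0])]
w = fednova_style_aggregate(w, deltas, local_steps=[10, 2], data_sizes=[100, 50])
print(w)
```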