Enhancing Convergence in Federated Learning: A Contribution-Aware
Asynchronous Approach
- URL: http://arxiv.org/abs/2402.10991v4
- Date: Mon, 4 Mar 2024 03:35:40 GMT
- Title: Enhancing Convergence in Federated Learning: A Contribution-Aware
Asynchronous Approach
- Authors: Changxin Xu, Yuxin Qiao, Zhanxin Zhou, Fanghao Ni, and Jize Xiong
- Abstract summary: Federated Learning (FL) is a distributed machine learning paradigm that allows clients to train models on their data while preserving their privacy.
FL algorithms, such as Federated Averaging (FedAvg) and its variants, have been shown to converge well in many scenarios.
However, these methods require clients to upload their local updates to the server in a synchronous manner, which can be slow and unreliable in realistic FL settings.
We propose a contribution-aware asynchronous FL method that takes into account the staleness and statistical heterogeneity of the received updates.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning paradigm that
allows clients to train models on their data while preserving their privacy. FL
algorithms, such as Federated Averaging (FedAvg) and its variants, have been
shown to converge well in many scenarios. However, these methods require
clients to upload their local updates to the server in a synchronous manner,
which can be slow and unreliable in realistic FL settings. To address this
issue, researchers have developed asynchronous FL methods that allow clients to
continue training on their local data using a stale global model. However, most
of these methods simply aggregate all of the received updates without
considering their relative contributions, which can slow down convergence. In
this paper, we propose a contribution-aware asynchronous FL method that takes
into account the staleness and statistical heterogeneity of the received
updates. Our method dynamically adjusts the contribution of each update based
on these factors, which can speed up convergence compared to existing methods.
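
The abstract stops short of a concrete weighting rule, so the following is only a minimal sketch of the general idea: the server scales each arriving update by a staleness factor and a heterogeneity factor before applying it. The function names, the polynomial staleness decay, and the cosine-similarity proxy for statistical heterogeneity are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def staleness_weight(staleness: int, decay: float = 0.5) -> float:
    """Downweight updates computed against an old global model.
    Polynomial decay is one common choice; the paper's exact rule
    is not given in the abstract."""
    return (1.0 + staleness) ** (-decay)

def heterogeneity_weight(update: np.ndarray, reference: np.ndarray,
                         eps: float = 1e-8) -> float:
    """Downweight updates that point away from a reference direction
    (e.g. a running average of recent updates), used here as a crude
    proxy for statistical heterogeneity."""
    cos = float(update @ reference) / (
        np.linalg.norm(update) * np.linalg.norm(reference) + eps)
    return max(cos, 0.0) + eps  # keep every weight strictly positive

def apply_update(global_model: np.ndarray, update: np.ndarray,
                 staleness: int, reference: np.ndarray,
                 lr: float = 1.0) -> np.ndarray:
    """Server-side rule: scale each arriving update by its contribution."""
    w = staleness_weight(staleness) * heterogeneity_weight(update, reference)
    return global_model + lr * w * update
```

In a real deployment the reference direction would be maintained incrementally, e.g. as an exponential moving average of recently aggregated updates.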
Related papers
- FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
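
The summary implies that, because backpropagation produces gradients from the output layer backwards, a straggler cut off mid-pass still holds valid gradients for its deepest layers. Below is a minimal sketch of per-layer aggregation over such partial updates; the dict-based representation and the plain averaging rule are illustrative assumptions, not SALF's exact procedure.

```python
import numpy as np

def aggregate_layerwise(global_layers, client_grads, lr=0.1):
    """Update each layer using only the clients that reached it.

    client_grads: one dict per client, mapping layer index -> gradient;
    stragglers report gradients only for the layers their (partial)
    backward pass covered.
    """
    new_layers = []
    for i, layer in enumerate(global_layers):
        grads = [g[i] for g in client_grads if i in g]
        if grads:  # at least one client reported this layer
            layer = layer - lr * np.mean(grads, axis=0)
        new_layers.append(layer)
    return new_layers
```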
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Momentum Approximation in Asynchronous Private Federated Learning [26.57367597853813]
Momentum approximation can achieve a 1.15x - 4x speed-up in convergence compared to existing FL algorithms with momentum.
Momentum approximation can be easily integrated in production FL systems with a minor communication and storage cost.
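
The summary does not spell out the estimator, so the sketch below only shows where a momentum term sits in an asynchronous server loop: a classic heavy-ball buffer folded over updates as they arrive. The class and the plain momentum rule are assumptions; the paper's actual approximation scheme is not described here.

```python
import numpy as np

class AsyncMomentumServer:
    """Server-side heavy-ball momentum over asynchronously arriving
    updates (a generic baseline, not the paper's estimator)."""

    def __init__(self, model: np.ndarray, lr: float = 1.0, beta: float = 0.9):
        self.model = model
        self.lr = lr
        self.beta = beta
        self.velocity = np.zeros_like(model)

    def receive(self, update: np.ndarray) -> np.ndarray:
        # Fold the new update into the momentum buffer, then step.
        self.velocity = self.beta * self.velocity + update
        self.model = self.model + self.lr * self.velocity
        return self.model
```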
arXiv Detail & Related papers (2024-02-14T15:35:53Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over the strongest baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Mitigating System Bias in Resource Constrained Asynchronous Federated Learning Systems [2.8790600498444032]
We propose a dynamic global model aggregation method within Asynchronous Federated Learning (AFL) deployments.
Our method scores and adjusts the weighting of client model updates based on their upload frequency to accommodate differences in device capabilities.
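
A minimal sketch of frequency-based reweighting consistent with that description follows; the inverse-frequency score is an assumption, since the paper's actual scoring function is not given in the summary.

```python
from collections import defaultdict
import numpy as np

class FrequencyAwareAggregator:
    """Weight each client's update by the inverse of its upload
    frequency, so slow devices are not drowned out by fast ones."""

    def __init__(self, model: np.ndarray, lr: float = 0.5):
        self.model = model
        self.lr = lr
        self.uploads = defaultdict(int)
        self.total_uploads = 0

    def receive(self, client_id, update: np.ndarray) -> np.ndarray:
        self.uploads[client_id] += 1
        self.total_uploads += 1
        # Inverse-frequency score: a client uploading half as often as
        # the average gets roughly twice the per-update weight.
        avg_uploads = self.total_uploads / len(self.uploads)
        weight = avg_uploads / self.uploads[client_id]
        self.model = self.model + self.lr * weight * update
        return self.model
```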
arXiv Detail & Related papers (2024-01-24T10:51:15Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- TimelyFL: Heterogeneity-aware Asynchronous Federated Learning with Adaptive Partial Training [17.84692242938424]
TimelyFL is a heterogeneity-aware asynchronous Federated Learning framework with adaptive partial training.
We show that TimelyFL improves the participation rate by 21.13%, achieves a 1.28x - 2.89x speed-up in convergence rate, and provides a 6.25% improvement in test accuracy.
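
The summary says clients train only part of the model, sized to their real-time resources. A toy sketch of one way to pick a trainable portion under a per-round deadline follows; the greedy layer selection is an assumption, not TimelyFL's actual schedule.

```python
def select_trainable_layers(layer_costs, time_budget):
    """Greedily pick a contiguous block of layers whose estimated
    per-round training cost fits the device's time budget."""
    chosen, spent = [], 0.0
    for i, cost in enumerate(layer_costs):
        if spent + cost > time_budget:
            break
        chosen.append(i)
        spent += cost
    return chosen

# Example: a slow device trains only the first two layers this round.
print(select_trainable_layers([1.0, 1.5, 2.0, 2.0], time_budget=3.0))  # [0, 1]
```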
arXiv Detail & Related papers (2023-04-14T06:26:08Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) based FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
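
As a rough illustration of the over-the-air setting (the power-scaling scheme below is a simplification, not COTAF's actual time-varying precoder): clients precode their updates, transmit simultaneously so the channel itself sums them, and the server rescales the noisy sum into an estimate of the average update.

```python
import numpy as np

def ota_round(updates, alpha_t, noise_std=0.1, rng=None):
    """One over-the-air aggregation round (simplified sketch).

    alpha_t plays the role of a time-varying precoding gain: as SGD
    updates shrink over training, a larger gain keeps the effective
    signal-to-noise ratio from collapsing.
    """
    rng = rng or np.random.default_rng(0)
    n = len(updates)
    # Channel output: sum of precoded transmissions plus Gaussian noise.
    received = sum(np.sqrt(alpha_t) * u for u in updates)
    received = received + rng.normal(0.0, noise_std, size=updates[0].shape)
    # Server undoes the precoding and averages.
    return received / (n * np.sqrt(alpha_t))
```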