Version age-based client scheduling policy for federated learning
- URL: http://arxiv.org/abs/2402.05407v1
- Date: Thu, 8 Feb 2024 04:48:51 GMT
- Title: Version age-based client scheduling policy for federated learning
- Authors: Xinyi Hu, Nikolaos Pappas, Howard H. Yang
- Abstract summary: Federated Learning (FL) has emerged as a privacy-preserving machine learning paradigm facilitating collaborative training across multiple clients without sharing local data.
Despite advancements in edge device capabilities, communication bottlenecks present challenges in aggregating a large number of clients.
This phenomenon gives rise to the critical challenge of stragglers in FL and underscores the profound impact of client scheduling policies on global model convergence and stability.
- Score: 25.835001146856396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has emerged as a privacy-preserving machine learning
paradigm facilitating collaborative training across multiple clients without
sharing local data. Despite advancements in edge device capabilities,
communication bottlenecks present challenges in aggregating a large number of
clients; only a portion of the clients can update their parameters upon each
global aggregation. This phenomenon gives rise to the critical challenge of
stragglers in FL and underscores the profound impact of client scheduling
policies on global model convergence and stability. Existing scheduling strategies address
staleness but predominantly focus on either timeliness or content. Motivated by
this, we introduce the novel concept of Version Age of Information (VAoI) to
FL. Unlike traditional Age of Information metrics, VAoI considers both
timeliness and content staleness. Each client's version age is updated
discretely, indicating the freshness of information. VAoI is incorporated into
the client scheduling policy to minimize the average VAoI, mitigating the
impact of outdated local updates and enhancing the stability of FL systems.
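The abstract stops short of pseudocode, so the following is only a toy illustration: a scheduler that, under a communication budget, picks the m clients with the highest version age each round, which drives the average VAoI down. The function names, the tie-breaking, and the assumption that every scheduled client syncs within one round are simplifications of ours, not the paper's algorithm.

```python
import numpy as np

def select_clients(version_age: np.ndarray, m: int) -> np.ndarray:
    """Schedule the m clients whose information is stalest (highest VAoI)."""
    return np.argsort(version_age)[-m:]

def average_vaoi(num_clients: int = 20, num_rounds: int = 50, m: int = 5) -> float:
    # version_age[i] = number of global model versions by which client i's
    # last delivered update lags the current version (0 = fully fresh).
    version_age = np.zeros(num_clients, dtype=int)
    total = 0.0
    for _ in range(num_rounds):
        scheduled = select_clients(version_age, m)
        version_age += 1            # each round produces a new global version
        version_age[scheduled] = 0  # scheduled clients sync to that version
        total += version_age.mean()
    return total / num_rounds

print(f"average VAoI: {average_vaoi():.2f}")
```

Under this max-age rule the scheduler effectively cycles through the population, so no client's version age can grow without bound, which is the stabilizing effect the abstract describes.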
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Value of Information and Timing-aware Scheduling for Federated Learning [24.40692354824834]
- Value of Information and Timing-aware Scheduling for Federated Learning [24.40692354824834]
Federated Learning (FL) offers a solution by preserving data privacy during training.
An access point (AP) brings the model directly to User Equipments (UEs) for local training.
arXiv Detail & Related papers (2023-12-16T17:51:22Z) - Re-Weighted Softmax Cross-Entropy to Control Forgetting in Federated
Learning [14.196701066823499]
In Federated Learning, a global model is learned by aggregating model updates computed at a set of independent client nodes.
We show that individual client models experience a catastrophic forgetting with respect to data from other clients.
We propose an efficient approach that modifies the cross-entropy objective on a per-client basis by re-weighting the softmax logits prior to computing the loss.
arXiv Detail & Related papers (2023-04-11T14:51:55Z) - Personalized Privacy-Preserving Framework for Cross-Silo Federated
- Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning [0.0]
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables DL-based approaches trained collaboratively across clients without sharing private data.
In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z) - Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z) - Personalized Retrogress-Resilient Framework for Real-World Medical
Federated Learning [8.240098954377794]
We propose a personalized retrogress-resilient framework to produce a superior personalized model for each client.
Our experiments on real-world dermoscopic FL dataset prove that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods.
arXiv Detail & Related papers (2021-10-01T13:24:29Z) - Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model depending on multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)