Timely Communication in Federated Learning
- URL: http://arxiv.org/abs/2012.15831v2
- Date: Sat, 13 Mar 2021 16:59:25 GMT
- Title: Timely Communication in Federated Learning
- Authors: Baturalp Buyukates and Sennur Ulukus
- Abstract summary: We consider a federated learning framework in which a parameter server (PS) trains a global model by using $n$ clients without storing the client data centrally at a cloud server.
Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.
We find the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$.
- Score: 65.1253801733098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a federated learning framework in which a parameter server (PS)
trains a global model by using $n$ clients without actually storing the client
data centrally at a cloud server. Focusing on a setting where the client
datasets are fast changing and highly temporal in nature, we investigate the
timeliness of model updates and propose a novel timely communication scheme.
Under the proposed scheme, at each iteration, the PS waits for $m$ available
clients and sends them the current model. Then, the PS uses the local updates
of the earliest $k$ out of $m$ clients to update the global model at each
iteration. We find the average age of information experienced by each client
and numerically characterize the age-optimal $m$ and $k$ values for a given
$n$. Our results indicate that, in addition to ensuring timeliness, the
proposed communication scheme results in significantly smaller average
iteration times compared to random client selection without hurting the
convergence of the global learning task.
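To make the proposed scheme concrete, below is a minimal simulation sketch. The i.i.d. exponential availability and compute-time models, the uniform stand-in for the $k$ earliest clients, and the age bookkeeping are our own illustrative assumptions, not the paper's exact analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100, m=20, k=10, iters=2000):
    """Toy run of the m-then-earliest-k scheme: the PS waits for m
    available clients, then aggregates the earliest k local updates."""
    t = 0.0                    # global clock
    last_update = np.zeros(n)  # time each client last reached the PS
    total_time = 0.0
    for _ in range(iters):
        # Time until m of the n clients become available
        # (m-th order statistic of i.i.d. exponential delays -- assumed).
        wait = np.sort(rng.exponential(1.0, size=n))[m - 1]
        # Time until the earliest k of those m finish their local update
        # (k-th order statistic of m i.i.d. exponential compute times).
        compute = np.sort(rng.exponential(1.0, size=m))[k - 1]
        t += wait + compute
        total_time += wait + compute
        # Refresh timestamps of k clients (uniform stand-in for the
        # k earliest of the m available).
        last_update[rng.choice(n, size=k, replace=False)] = t
    return total_time / iters, float(np.mean(t - last_update))

for m, k in [(10, 5), (20, 10), (40, 20)]:
    it_time, age = simulate(m=m, k=k)
    print(f"m={m:3d} k={k:3d}  avg iteration time={it_time:.3f}  age proxy={age:.3f}")
```

Sweeping $(m, k)$ pairs this way is a cheap way to eyeball the age-optimal choices that the paper characterizes numerically.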
Related papers
- $r$Age-$k$: Communication-Efficient Federated Learning Using Age Factor [31.285983939625098]
Federated learning (FL) is a collaborative approach where multiple clients, coordinated by a parameter server (PS), train a unified machine-learning model.
This paper introduces a new communication-efficient algorithm that uses the age of information metric to tackle two key limitations of FL (a generic age-bookkeeping sketch follows this entry).
arXiv Detail & Related papers (2024-10-29T16:30:34Z)
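For readers new to the metric, here is generic age-of-information bookkeeping at a PS. This is a minimal textbook-style sketch, not the $r$Age-$k$ update rule itself.

```python
# Generic age-of-information bookkeeping at the PS (illustrative only).
ages = {cid: 0 for cid in range(5)}  # one age counter per client

def tick(received):
    """Advance one round: every age grows by 1, except that clients whose
    updates were aggregated this round reset to 0 (fresh news has age 0)."""
    for cid in ages:
        ages[cid] = 0 if cid in received else ages[cid] + 1

tick({1, 3})  # round 1: clients 1 and 3 contributed
tick({0})     # round 2: only client 0 contributed
print(ages)   # {0: 0, 1: 1, 2: 2, 3: 1, 4: 2}
```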
- CAFe: Cost and Age aware Federated Learning [34.16488071014024]
In many federated learning (FL) models, a common strategy is to wait for at least $M$ clients out of the total $N$ clients to send back their local gradients.
We show that the average age of a client at the PS appears explicitly in the theoretical convergence bound and can therefore be used as a metric to quantify the convergence of the global model (a toy age-tracking sketch follows this entry).
arXiv Detail & Related papers (2024-05-24T17:41:30Z)
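A hedged sketch of how the average client age at the PS could be tracked under the wait-for-$M$ strategy; drawing the $M$ arrivals as a uniformly random subset each round is our simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, rounds = 50, 10, 200
last_seen = np.zeros(N, dtype=int)  # round at which each client last reported

ages = []
for r in range(1, rounds + 1):
    # Wait-for-M: the round completes once M of the N clients return
    # their gradients (here a uniformly random M-subset, for illustration).
    arrived = rng.choice(N, size=M, replace=False)
    last_seen[arrived] = r
    # Average age of the clients' most recent contributions at the PS.
    ages.append(np.mean(r - last_seen))

print(f"long-run average client age at the PS: {np.mean(ages[50:]):.2f} rounds")
# Under this uniform pattern the age hovers near (N - M) / M = 4 rounds.
```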
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling); a toy rendering of the idea follows this entry.
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
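One plausible reading of data-uniform sampling: choose clients with probability proportional to their local data size, so that every data point is equally likely to be used. The hypothetical dataset sizes below illustrate that reading, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = np.array([1200, 300, 50, 900, 400, 150])  # hypothetical local dataset sizes

# Uniform-over-clients sampling gives every client probability 1/6, so a
# point in a small dataset is far more likely to be used than one in a
# large dataset. Sampling clients proportionally to data size instead
# makes every data point equally likely to participate.
probs = sizes / sizes.sum()
picked = rng.choice(len(sizes), size=3, replace=False, p=probs)
print("selected clients:", picked)
```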
- Timely Asynchronous Hierarchical Federated Learning: Age of Convergence [59.96266198512243]
We consider an asynchronous hierarchical federated learning setting with a client-edge-cloud framework.
The clients exchange the trained parameters with their corresponding edge servers, which update the locally aggregated model.
The goal of each client is to converge to the global model while maintaining timeliness (a minimal two-tier aggregation sketch follows this entry).
arXiv Detail & Related papers (2023-06-21T17:39:16Z)
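A minimal two-tier sketch of the client-edge-cloud pattern described above; plain averaging at both tiers and the tiny topology are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4  # toy model dimension
# Two edge servers, each serving three clients (hypothetical topology).
clients_per_edge = [[rng.normal(size=d) for _ in range(3)] for _ in range(2)]

# Tier 1: each edge server aggregates the parameters of its own clients.
edge_models = [np.mean(np.stack(c), axis=0) for c in clients_per_edge]

# Tier 2: the cloud aggregates the locally aggregated edge models.
global_model = np.mean(np.stack(edge_models), axis=0)
print("global model:", np.round(global_model, 3))
```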
- Towards Bias Correction of FedAvg over Nonuniform and Time-Varying Communications [26.597515045714502]
Federated learning (FL) is a decentralized learning framework wherein a parameter server (PS) and a collection of clients collaboratively train a model via a global objective.
We show that when the channel conditions are heterogeneous across clients and vary over time, the FedAvg global model can be biased; postponing global model updates enables gossip-type information mixing among clients that corrects this bias.
arXiv Detail & Related papers (2023-06-01T01:52:03Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-iid distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without compromising data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process (a toy weight-learning sketch follows this entry).
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
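One toy way to picture an optimized server-side aggregation: learn convex combination weights for the client models against a small trusted set held by the server. The linear models, the trusted-set assumption, and plain gradient descent are ours; SmartFL's actual subspace procedure differs.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_clients = 5, 4
W = rng.normal(size=(n_clients, d))  # client model parameters (toy linear models)
X = rng.normal(size=(32, d))         # small trusted set held by the server
y = X @ rng.normal(size=d)           # hypothetical regression targets

alpha = np.zeros(n_clients)          # logits of the aggregation weights
for _ in range(300):
    p = np.exp(alpha - alpha.max()); p /= p.sum()  # softmax keeps weights convex
    model = W.T @ p                                # aggregated global model
    grad_model = X.T @ (X @ model - y) / len(y)    # d(MSE/2)/d(model)
    g = W @ grad_model                             # d(loss)/d(p)
    alpha -= 0.5 * p * (g - p @ g)                 # softmax chain rule
print("learned aggregation weights:", np.round(p, 3))
```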
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and the server side (a product-of-Gaussians aggregation sketch follows this entry).
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
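Below is the standard product-of-Gaussians aggregation that a Laplace-approximation framework might use on the server side: precisions add, and means are precision-weighted. The diagonal form and the toy numbers are our assumptions, not necessarily the paper's exact update.

```python
import numpy as np

# Each client holds a diagonal Gaussian (Laplace) approximation of its
# local posterior: mean mu_i and diagonal precision lam_i (toy values).
mus  = [np.array([1.0, 0.5]), np.array([0.8, 0.9]), np.array([1.2, 0.4])]
lams = [np.array([4.0, 1.0]), np.array([2.0, 3.0]), np.array([1.0, 2.0])]

# Product of Gaussians: precisions add; means are precision-weighted.
lam_global = np.sum(lams, axis=0)
mu_global = np.sum([l * m for l, m in zip(lams, mus)], axis=0) / lam_global
print("global mean:", np.round(mu_global, 3), " global precision:", lam_global)
```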
- Communication-Efficient Federated Learning via Optimal Client Sampling [20.757477553095637]
Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients.
We propose a novel, simple, and efficient way of updating the central model in communication-constrained settings (an importance-sampling sketch follows this entry).
We test this policy on a synthetic dataset for logistic regression and two FL benchmarks, namely, a classification task on EMNIST and a realistic language modeling task.
arXiv Detail & Related papers (2020-07-30T02:58:00Z)
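One standard way to realize communication-constrained updating: keep each client's update with probability proportional to its norm and rescale kept updates so the aggregate stays unbiased. This importance-sampling sketch is our illustration, not the paper's exact policy.

```python
import numpy as np

rng = np.random.default_rng(5)
updates = [rng.normal(size=8) * s for s in (0.1, 0.5, 2.0, 1.0, 0.2)]

# Keep client i with probability p_i proportional to its update norm
# (capped at 1) and rescale kept updates by 1/p_i, so the aggregate is
# an unbiased estimate of the full average.
norms = np.array([np.linalg.norm(u) for u in updates])
p = np.minimum(1.0, 3 * norms / norms.sum())  # budget of ~3 clients before capping
kept = [u / p[i] for i, u in enumerate(updates) if rng.random() < p[i]]
estimate = np.sum(kept, axis=0) / len(updates)
print("participating clients:", len(kept))
print("estimated average update:", np.round(estimate, 2))
```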
This list is automatically generated from the titles and abstracts of the papers on this site.