Communication-Efficient Federated Learning via Optimal Client Sampling
- URL: http://arxiv.org/abs/2007.15197v2
- Date: Wed, 14 Oct 2020 19:08:30 GMT
- Title: Communication-Efficient Federated Learning via Optimal Client Sampling
- Authors: Monica Ribero, Haris Vikalo
- Abstract summary: Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients.
We propose a novel, simple and efficient way of updating the central model in communication-constrained settings.
We test this policy on a synthetic dataset for logistic regression and two FL benchmarks, namely, a classification task on EMNIST and a realistic language modeling task.
- Score: 20.757477553095637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) ameliorates privacy concerns in settings where a
central server coordinates learning from data distributed across many clients.
The clients train locally and communicate the models they learn to the server;
aggregation of local models requires frequent communication of large amounts of
information between the clients and the central server. We propose a novel,
simple and efficient way of updating the central model in
communication-constrained settings based on collecting models from clients with
informative updates and estimating local updates that were not communicated. In
particular, modeling the progression of the model's weights as an
Ornstein-Uhlenbeck process allows us to derive an optimal sampling strategy for
selecting a subset of clients with significant weight updates. The central
server collects updated local models from only the selected clients and
combines them with estimated model updates of the clients that were not
selected for communication. We test this policy on a synthetic dataset for
logistic regression and two FL benchmarks, namely, a classification task on
EMNIST and a realistic language modeling task using the Shakespeare dataset.
The results demonstrate that the proposed framework significantly reduces
communication while matching or exceeding the performance of the baseline. Our
method represents a new line of strategies for communication-efficient FL that
is orthogonal to existing user-local methods such as quantization and
sparsification, complementing rather than replacing them.
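The sampling policy described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' exact algorithm: the function names, the top-norm selection rule, and the mean-reverting estimator for unselected clients are all assumptions made for the sketch.

```python
import numpy as np

def ou_estimate(prev_update, theta=0.5, mu=0.0):
    """Estimate a client's current update from its last communicated one,
    assuming the updates follow a mean-reverting Ornstein-Uhlenbeck process
    (hypothetical drift step; theta and mu are illustrative parameters)."""
    return prev_update + theta * (mu - prev_update)

def aggregate_round(updates, last_seen, budget):
    """Collect exact updates from the `budget` clients with the largest
    update norms; estimate the remaining clients' updates from their last
    communicated values, then average everything on the server."""
    norms = np.linalg.norm(updates, axis=1)
    selected = np.argsort(norms)[-budget:]      # clients with significant updates
    combined = np.array([ou_estimate(u) for u in last_seen])
    combined[selected] = updates[selected]      # exact updates where collected
    return combined.mean(axis=0), selected

rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 4))    # 10 clients, 4-dimensional model
last_seen = rng.normal(size=(10, 4))  # updates from the previous round
avg_update, chosen = aggregate_round(updates, last_seen, budget=3)
```

Only the `budget` selected clients communicate in a round, which is where the savings come from; the estimator stands in for everyone else.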
Related papers
- Towards Client Driven Federated Learning [7.528642177161784]
We introduce Client-Driven Federated Learning (CDFL), a novel FL framework that puts clients at the driving role.
In CDFL, each client independently and asynchronously updates its model by uploading the locally trained model to the server and receiving a customized model tailored to its local task.
arXiv Detail & Related papers (2024-05-24T10:17:49Z)
- FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing [0.0]
Federated learning (FL) is a recently developed area of machine learning.
In this paper, a novel scheme based on the notion of "model growing" is proposed.
The proposed approach is tested extensively on three standard benchmarks and is shown to achieve substantial reduction in communication and client computation.
arXiv Detail & Related papers (2022-07-19T21:54:53Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead.
We propose a transmission and aggregation framework to reduce the effect of such attacks.
arXiv Detail & Related papers (2022-05-05T22:09:21Z)
- FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients [41.623518032533035]
In split learning, only a small part of the model is stored and trained on clients while the remaining large part of the model only stays at the servers.
This paper addresses this issue by compressing the additional communication using a novel clustering scheme accompanied by a gradient correction method.
arXiv Detail & Related papers (2022-01-28T00:09:53Z)
- ADDS: Adaptive Differentiable Sampling for Robust Multi-Party Learning [24.288233074516455]
We propose a novel adaptive differentiable sampling framework (ADDS) for robust and communication-efficient multi-party learning.
The proposed framework significantly reduces local computation and communication costs while speeding up the central model convergence.
arXiv Detail & Related papers (2021-10-29T03:35:15Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Timely Communication in Federated Learning [65.1253801733098]
We consider a global learning framework in which a parameter server (PS) trains a global model by using $n$ clients without actually storing the client data centrally at a cloud server.
Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.
We find the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$.
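The scheme summarized above can be illustrated with a toy timing model; the exponential per-client delays and the rule that a round completes once the earliest $k$ of the $m$ contacted clients respond are assumptions made for this sketch, not details taken from the paper.

```python
import random

def round_duration(delays, k):
    """One training round: the parameter server contacts m clients
    (one delay per client) and the round completes when the earliest
    k responses arrive, i.e. at the k-th smallest delay."""
    return sorted(delays)[k - 1]

random.seed(1)
m, k = 5, 3
delays = [random.expovariate(1.0) for _ in range(m)]  # hypothetical response times
duration = round_duration(delays, k)
```

Waiting for only the earliest $k$ responders shortens each round, which is how such schemes trade stragglers for fresher (lower-age) model updates.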
arXiv Detail & Related papers (2020-12-31T18:52:08Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.