Federated Learning with Regularized Client Participation
- URL: http://arxiv.org/abs/2302.03662v1
- Date: Tue, 7 Feb 2023 18:26:07 GMT
- Title: Federated Learning with Regularized Client Participation
- Authors: Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
- Abstract summary: Federated Learning (FL) is a distributed machine learning approach where multiple clients work together to solve a machine learning task.
One of the key challenges in FL is the issue of partial participation, which occurs when a large number of clients are involved in the training process.
We propose a new technique and design a novel regularized client participation scheme.
- Score: 1.433758865948252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed machine learning approach where
multiple clients work together to solve a machine learning task. One of the key
challenges in FL is the issue of partial participation, which occurs when a
large number of clients are involved in the training process. The traditional
method to address this problem is to randomly select a subset of clients at
each communication round. In our research, we propose a new technique and
design a novel regularized client participation scheme. Under this scheme, each
client joins the learning process every $R$ communication rounds, which we
refer to as a meta epoch. We have found that this participation scheme leads to
a reduction in the variance caused by client sampling. Combined with the
popular FedAvg algorithm (McMahan et al., 2017), it results in superior rates
under standard assumptions. For instance, the optimization term in our main
convergence bound decreases linearly with the product of the number of
communication rounds and the size of the local dataset of each client, and the
statistical term scales with step size quadratically instead of linearly (the
case for client sampling with replacement), leading to a better convergence rate
$\mathcal{O}\left(\frac{1}{T^2}\right)$ compared to
$\mathcal{O}\left(\frac{1}{T}\right)$, where $T$ is the total number of
communication rounds. Furthermore, our results permit arbitrary client
availability as long as each client is available for training once in each
meta epoch.
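To make the participation scheme concrete, here is a minimal Python sketch of the round-robin schedule described in the abstract: clients are permuted once, split into cohorts, and each cohort is activated once per meta epoch of $R$ rounds, contrasted with the usual sampling with replacement. The function names and the cohort-size parameter are illustrative assumptions, not taken from the paper.

    import random

    def regularized_schedule(num_clients, cohort_size, seed=0):
        """Illustrative round-robin participation: permute clients once,
        split them into cohorts, and activate one cohort per round so that
        every client participates exactly once per meta epoch of
        R = num_clients // cohort_size rounds."""
        rng = random.Random(seed)
        clients = list(range(num_clients))
        rng.shuffle(clients)
        cohorts = [clients[i:i + cohort_size]
                   for i in range(0, num_clients, cohort_size)]
        round_idx = 0
        while True:  # yields the cohort used in each communication round
            yield cohorts[round_idx % len(cohorts)]
            round_idx += 1

    def sampling_with_replacement(num_clients, cohort_size, seed=0):
        """Baseline: independently sample a cohort each round, so a client
        may be picked several times (or never) within a meta epoch."""
        rng = random.Random(seed)
        while True:
            yield [rng.randrange(num_clients) for _ in range(cohort_size)]

    # Example: 8 clients, cohorts of 2, meta epoch R = 4 rounds.
    reg = regularized_schedule(8, 2)
    print([next(reg) for _ in range(4)])  # each client appears exactly once

Because every client appears exactly once per meta epoch, the within-epoch variance introduced by random client selection is removed, which is the source of the improved statistical term stated in the abstract.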
Related papers
- Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis [14.98493572536424]
In federated learning, it is common to assume that clients are always available to participate in training, which may not be feasible with user devices in practice.
Recent works analyze federated learning under more realistic participation patterns, such as cyclic client availability or arbitrary participation.
arXiv Detail & Related papers (2024-10-30T15:41:35Z) - $\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning [6.977111770337479]
We introduce One-shot Private Aggregation ($\mathsf{OPA}$) where clients speak only once (or even choose not to speak) per aggregation evaluation.
Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation.
$\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions.
arXiv Detail & Related papers (2024-10-29T17:50:11Z) - Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to "squeeze more juice" out of each cohort than what is possible in a single communication round.
Our approach leads to up to a 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z) - SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning [48.072207894076556]
Cross-device training is a subfield of federated learning where the number of clients can reach into the billions.
Standard approaches and local methods are prone to issues such as insensitivity to cross-device data similarity.
Our method is the first of its kind that does not require smoothness of the objective and provably benefits from clients having similar data.
arXiv Detail & Related papers (2024-05-30T15:07:30Z) - LEFL: Low Entropy Client Sampling in Federated Learning [6.436397118145477]
Federated learning (FL) is a machine learning paradigm where multiple clients collaborate to optimize a single global model using their private data.
We propose LEFL, an alternative sampling strategy that performs a one-time clustering of clients based on their model's learned high-level features.
We show that clients sampled with this approach yield a low relative entropy with respect to the global data distribution.
arXiv Detail & Related papers (2023-12-29T01:44:20Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - Optimizing Server-side Aggregation For Robust Federated Learning via
Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z) - A Bayesian Federated Learning Framework with Online Laplace
Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z) - Timely Communication in Federated Learning [65.1253801733098]
We consider a global learning framework in which a parameter server (PS) trains a global model by using $n$ clients without actually storing the client data centrally at a cloud server.
Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.
We find the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$.
arXiv Detail & Related papers (2020-12-31T18:52:08Z) - Optimal Client Sampling for Federated Learning [0.0]
We restrict the number of clients allowed to communicate their updates back to the master node.
In each communication round, all participating clients compute their updates, but only the ones with "important" updates communicate back to the master.
We show that importance can be measured using only the norm of the update and give a formula for optimal client participation (an illustrative sketch of this norm-based selection rule appears after this list).
arXiv Detail & Related papers (2020-10-26T17:05:13Z)