A Unified Analysis of Federated Learning with Arbitrary Client
Participation
- URL: http://arxiv.org/abs/2205.13648v1
- Date: Thu, 26 May 2022 21:56:31 GMT
- Title: A Unified Analysis of Federated Learning with Arbitrary Client
Participation
- Authors: Shiqiang Wang, Mingyue Ji
- Abstract summary: Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.
It is important to understand how partial client participation affects convergence.
We provide a unified convergence analysis for FL with arbitrary client participation.
- Score: 33.86606068136201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) faces challenges of intermittent client availability
and computation/communication efficiency. As a result, only a small subset of
clients can participate in FL at a given time. It is important to understand
how partial client participation affects convergence, but most existing works
have either considered idealized participation patterns or obtained results
with non-zero optimality error for generic patterns. In this paper, we provide
a unified convergence analysis for FL with arbitrary client participation. We
first introduce a generalized version of federated averaging (FedAvg) that
amplifies parameter updates at an interval of multiple FL rounds. Then, we
present a novel analysis that captures the effect of client participation in a
single term. By analyzing this term, we obtain convergence upper bounds for a
wide range of participation patterns, including both non-stochastic and
stochastic cases, which match either the lower bound of stochastic gradient
descent (SGD) or the state-of-the-art results in specific settings. We also
discuss various insights, recommendations, and experimental results.
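The abstract describes the algorithm only at a high level, so the following Python sketch is one plausible reading rather than the paper's exact method: a FedAvg variant in which, once every `interval` rounds, the server rescales the net change to the global model by an amplification factor. The names `interval`, `amplify`, and `participates`, and the toy least-squares objective, are all illustrative assumptions, not from the paper.

```python
import numpy as np

def participates(cid, t):
    """Illustrative arbitrary participation pattern (here: random)."""
    rng = np.random.default_rng(hash((cid, t)) % (2**32))
    return rng.random() < 0.3

def local_sgd(w, data, lr=0.01, steps=5):
    """A few local SGD steps on a toy least-squares objective."""
    w = w.copy()
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n)||Xw - y||^2
        w -= lr * grad
    return w

def generalized_fedavg(clients, w0, rounds=100, interval=4, amplify=2.0):
    """Sketch of FedAvg with updates amplified every `interval` rounds.

    `clients` maps a client id to its local data (X, y). Each round,
    whichever clients happen to participate run local SGD and the server
    averages their models; every `interval` rounds, the change to the
    global model accumulated over that window is rescaled by `amplify`.
    """
    w = w0.copy()
    w_anchor = w0.copy()  # global model at the start of the current interval
    for t in range(rounds):
        active = [cid for cid in clients if participates(cid, t)]
        if active:  # skip rounds where no client is available
            w = np.mean([local_sgd(w, clients[cid]) for cid in active], axis=0)
        if (t + 1) % interval == 0:
            w = w_anchor + amplify * (w - w_anchor)  # amplified update
            w_anchor = w.copy()
    return w

# Illustrative usage on synthetic clients:
rng = np.random.default_rng(0)
clients = {i: (rng.normal(size=(20, 5)), rng.normal(size=20)) for i in range(10)}
w = generalized_fedavg(clients, w0=np.zeros(5))
```

Intuitively, when clients participate infrequently, the per-round progress of plain FedAvg shrinks; rescaling the accumulated update over a window of rounds is one way such a scheme can compensate, consistent with the abstract's description of amplifying parameter updates at an interval of multiple FL rounds.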
Related papers
- Debiasing Federated Learning with Correlated Client Participation [25.521881752822164]
This paper introduces a theoretical framework that models client participation in FL as a Markov chain.
Every client must wait a minimum number of $R$ rounds (minimum separation) before re-participating.
We develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution; a toy simulation of this participation pattern is sketched after this list.
arXiv Detail & Related papers (2024-10-02T03:30:53Z) - Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training results.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Confidence-aware Personalized Federated Learning via Variational
Expectation Maximization [34.354154518009956]
Personalized Federated Learning (PFL) is a distributed learning scheme to train a shared model across clients.
We present a novel framework for PFL based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z) - On the Convergence of Federated Averaging with Cyclic Client
Participation [27.870720693512045]
Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL).
Previous convergence analyses of FedAvg assume full client participation or partial client participation where the clients can be uniformly sampled.
In practical cross-device FL systems, only the clients that satisfy local criteria such as battery status, network connectivity, and maximum participation frequency requirements (to ensure privacy) are available for training at a given time.
arXiv Detail & Related papers (2023-02-06T20:18:19Z) - SplitGP: Achieving Both Generalization and Personalization in Federated
Learning [31.105681433459285]
SplitGP captures generalization and personalization capabilities for efficient inference across resource-constrained clients.
We analytically characterize the convergence behavior of SplitGP, revealing that all client models asymptotically approach stationary points.
Experimental results show that SplitGP outperforms existing baselines by wide margins in inference time and test accuracy for varying amounts of out-of-distribution samples.
arXiv Detail & Related papers (2022-12-16T08:37:24Z) - FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z) - FL Games: A federated learning framework for distribution shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning for learning causal features that are invariant across clients.
arXiv Detail & Related papers (2022-05-23T07:51:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.