A Unified Analysis of Federated Learning with Arbitrary Client Participation
- URL: http://arxiv.org/abs/2205.13648v4
- Date: Sun, 29 Dec 2024 05:00:26 GMT
- Title: A Unified Analysis of Federated Learning with Arbitrary Client Participation
- Authors: Shiqiang Wang, Mingyue Ji
- Abstract summary: Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.
It is important to understand how partial client participation affects convergence.
We provide a unified convergence analysis for FL with arbitrary client participation.
- Abstract: Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency. As a result, only a small subset of clients can participate in FL at a given time. It is important to understand how partial client participation affects convergence, but most existing works have either considered idealized participation patterns or obtained results with non-zero optimality error for generic patterns. In this paper, we provide a unified convergence analysis for FL with arbitrary client participation. We first introduce a generalized version of federated averaging (FedAvg) that amplifies parameter updates at an interval of multiple FL rounds. Then, we present a novel analysis that captures the effect of client participation in a single term. By analyzing this term, we obtain convergence upper bounds for a wide range of participation patterns, including both non-stochastic and stochastic cases, which match either the lower bound of stochastic gradient descent (SGD) or the state-of-the-art results in specific settings. We also discuss various insights, recommendations, and experimental results.
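The generalized FedAvg is only described at a high level in the abstract; the following is a minimal sketch of the idea, assuming the server runs standard FedAvg aggregation each round and additionally scales the accumulated global update by an amplification factor at a fixed interval of rounds. The parameter names (`interval`, `eta`), the toy quadratic objective, and the random-availability model are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_sgd(w, target, lr=0.1, steps=5):
    """A few local SGD steps on a toy quadratic objective 0.5*||w - target||^2."""
    for _ in range(steps):
        w = w - lr * (w - target)  # gradient of the toy objective
    return w

def generalized_fedavg(client_optima, rounds=50, interval=5, eta=2.0, seed=0):
    """FedAvg variant that amplifies the accumulated global update every
    `interval` rounds by factor `eta` (names are illustrative)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(client_optima.shape[1])  # global model
    w_anchor = w.copy()                   # snapshot at the start of each interval
    for r in range(1, rounds + 1):
        # arbitrary participation: a random subset is available this round
        avail = rng.random(len(client_optima)) < 0.3
        if avail.any():
            updates = [local_sgd(w.copy(), x) - w for x in client_optima[avail]]
            w = w + np.mean(updates, axis=0)      # standard FedAvg aggregation
        if r % interval == 0:
            w = w_anchor + eta * (w - w_anchor)   # amplified update over the interval
            w_anchor = w.copy()
    return w

# toy run: 10 clients whose local optima differ (heterogeneous data)
optima = np.random.default_rng(1).normal(size=(10, 3))
print(generalized_fedavg(optima))
```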
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- Debiasing Federated Learning with Correlated Client Participation [25.521881752822164]
This paper introduces a theoretical framework that models client participation in FL as a Markov chain.
Each client must wait at least $R$ rounds (a minimum separation) before participating again.
We develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution.
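As a rough illustration of this participation model (not the paper's debiasing algorithm), the sketch below simulates clients that become ineligible for $R$ rounds after each participation; the per-round join probability `p` is an illustrative assumption.

```python
import numpy as np

def simulate_participation(num_clients=20, rounds=1000, R=5, p=0.3, seed=0):
    """Correlated participation: a client that joined in round t is ineligible
    for the next R rounds (minimum separation); eligible clients join
    independently with probability p (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    next_eligible = np.zeros(num_clients, dtype=int)
    counts = np.zeros(num_clients, dtype=int)
    for t in range(rounds):
        eligible = next_eligible <= t
        joins = eligible & (rng.random(num_clients) < p)
        counts += joins
        next_eligible[joins] = t + R + 1  # enforce the minimum separation
    return counts / rounds                # empirical participation frequencies

print(simulate_participation())  # frequencies are capped near 1/(R+1)
```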
arXiv Detail & Related papers (2024-10-02T03:30:53Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
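A minimal sketch of the single-round idea, assuming a simple greedy heuristic that picks clients whose averaged gradient best matches the full-set average; the paper's exact estimator and its fairness constraint are not reproduced here.

```python
import numpy as np

def greedy_select(grads, k):
    """Greedily pick k clients so that the mean gradient of the selected
    subset stays close (in L2) to the mean gradient of the full client set.
    `grads` has shape (num_clients, dim); a simplified sketch only."""
    target = grads.mean(axis=0)  # full-participation gradient
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for i in range(len(grads)):
            if i in chosen:
                continue
            subset_mean = grads[chosen + [i]].mean(axis=0)
            err = np.linalg.norm(subset_mean - target)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
    return chosen

grads = np.random.default_rng(0).normal(size=(30, 8))
print(greedy_select(grads, k=5))
```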
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [34.354154518009956]
We present a novel framework for personalized Federated Learning (PFL) based on hierarchical modeling and variational inference.
PFL is a distributed learning scheme that trains a shared model across clients while adapting it to each client.
arXiv Detail & Related papers (2023-05-21T20:12:27Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM has the additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
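A small sketch of the underlying GMM idea using scikit-learn's `GaussianMixture` (an assumed stand-in; FedGMM's actual federated training procedure is not shown): fit a mixture to a client's inputs and flag low-likelihood samples as novel.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustration only: fit a mixture to one client's input features and flag
# low-likelihood inputs as novel; component count and threshold are arbitrary.
rng = np.random.default_rng(0)
client_inputs = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

gmm = GaussianMixture(n_components=3, random_state=0).fit(client_inputs)

test = np.vstack([rng.normal(0, 1, (5, 4)),   # in-distribution samples
                  rng.normal(8, 1, (5, 4))])  # shifted, likely novel
scores = gmm.score_samples(test)              # per-sample log-likelihood
threshold = np.quantile(gmm.score_samples(client_inputs), 0.01)
print(scores < threshold)                     # True marks novel samples
```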
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- On the Convergence of Federated Averaging with Cyclic Client Participation [27.870720693512045]
Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL).
Previous convergence analyses of FedAvg assume full client participation or partial client participation where the clients can be uniformly sampled.
In practical cross-device FL systems, only the clients that satisfy local criteria such as battery status, network connectivity, and maximum participation frequency (to ensure privacy) are available for training at a given time.
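To make the cyclic pattern concrete, here is a minimal sketch, assuming clients are partitioned into groups that become available in a fixed rotation, with a couple of clients sampled from the active group each round; the group count and sample size are illustrative, not the paper's setup.

```python
import numpy as np

def cyclic_rounds(num_clients=12, num_groups=4, rounds=8, seed=0):
    """Cyclic participation: clients are split into groups that become
    available in rotation; each round samples from the active group."""
    rng = np.random.default_rng(seed)
    groups = np.array_split(rng.permutation(num_clients), num_groups)
    schedule = []
    for r in range(rounds):
        active = groups[r % num_groups]  # group available this round
        sampled = rng.choice(active, size=2, replace=False)
        schedule.append((r, sorted(sampled.tolist())))
    return schedule

for r, clients in cyclic_rounds():
    print(f"round {r}: clients {clients}")
```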
arXiv Detail & Related papers (2023-02-06T20:18:19Z)
- SplitGP: Achieving Both Generalization and Personalization in Federated Learning [31.105681433459285]
SplitGP captures generalization and personalization capabilities for efficient inference across resource-constrained clients.
We analytically characterize the convergence behavior of SplitGP, revealing that all client models approach stationary points asymptotically.
Experimental results show that SplitGP outperforms existing baselines by wide margins in inference time and test accuracy for varying amounts of out-of-distribution samples.
arXiv Detail & Related papers (2022-12-16T08:37:24Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)