Debiasing Federated Learning with Correlated Client Participation
- URL: http://arxiv.org/abs/2410.01209v1
- Date: Wed, 2 Oct 2024 03:30:53 GMT
- Title: Debiasing Federated Learning with Correlated Client Participation
- Authors: Zhenyu Sun, Ziyang Zhang, Zheng Xu, Gauri Joshi, Pranay Sharma, Ermin Wei
- Abstract summary: This paper introduces a theoretical framework that models client participation in FL as a Markov chain.
Every client must wait at least $R$ rounds (minimum separation) before participating again.
We develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution.
- Score: 25.521881752822164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In cross-device federated learning (FL) with millions of mobile clients, only a small subset of clients participate in training in every communication round, and Federated Averaging (FedAvg) is the most popular algorithm in practice. Existing analyses of FedAvg usually assume the participating clients are independently sampled in each round from a uniform distribution, which does not reflect real-world scenarios. This paper introduces a theoretical framework that models client participation in FL as a Markov chain to study optimization convergence when clients have non-uniform and correlated participation across rounds. We apply this framework to analyze a more general and practical pattern: every client must wait at least $R$ rounds (minimum separation) before participating again. We theoretically prove and empirically observe that increasing minimum separation reduces the bias induced by intrinsic non-uniformity of client availability in cross-device FL systems. Furthermore, we develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution under arbitrary minimum separation and unknown client availability distribution.
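The abstract's claims lend themselves to a quick numerical illustration: non-uniform availability biases plain FedAvg-style averaging toward frequently available clients, a minimum-separation constraint flattens participation frequencies, and inverse-frequency reweighting removes the remaining bias. The sketch below is a toy mean-estimation stand-in for the FL objective, not the paper's algorithm; `N`, `R`, `T`, `m`, and the Dirichlet availability model are all illustrative assumptions.

```python
# Toy simulation, NOT the paper's algorithm: estimating a uniform average of
# per-client values under correlated, non-uniform client participation.
# All constants and the availability model below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, R, T, m = 100, 5, 5_000, 10           # clients, min separation, rounds, cohort size
avail = rng.dirichlet(np.ones(N) * 2.0)  # non-uniform availability distribution
local_opt = rng.normal(size=N)           # each client's local value (stand-in for its optimum)
target = local_opt.mean()                # unbiased objective: plain uniform average

last_round = np.full(N, -R - 1)          # round of each client's last participation
counts = np.zeros(N)                     # empirical participation counts

for t in range(T):
    eligible = np.flatnonzero(t - last_round > R)          # enforce minimum separation R
    p = avail[eligible] / avail[eligible].sum()
    cohort = rng.choice(eligible, size=min(m, eligible.size), replace=False, p=p)
    last_round[cohort] = t
    counts[cohort] += 1

participated = counts > 0
plain_est = (counts * local_opt).sum() / counts.sum()      # biased toward available clients
w = np.zeros(N)
w[participated] = 1.0 / counts[participated]               # inverse participation frequency
debiased_est = (w * counts * local_opt).sum() / (w * counts).sum()
print(f"target {target:+.4f}  plain {plain_est:+.4f}  debiased {debiased_est:+.4f}")
# Raising R forces frequent clients to sit out, flattening `counts` and
# shrinking the gap between plain_est and target -- consistent with the
# paper's observation that larger minimum separation reduces bias.
```

Here debiasing reduces to reweighting each client's contribution by the inverse of its empirical participation frequency; per the abstract, the paper's algorithm achieves an analogous correction inside FedAvg without knowing the availability distribution.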
Related papers
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- On the Convergence of Federated Averaging with Cyclic Client Participation [27.870720693512045]
Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL).
Previous convergence analyses of FedAvg assume full client participation or partial client participation where the clients can be uniformly sampled.
In practical cross-device FL systems, only the subset of clients that satisfy local criteria, such as battery status, network connectivity, and maximum participation frequency (to ensure privacy), are available for training at a given time.
arXiv Detail & Related papers (2023-02-06T20:18:19Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
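As a rough illustration of the idea (not Fed-CBS itself), one natural class-imbalance measure for a candidate cohort is the squared distance between the cohort's pooled label distribution and the uniform distribution; the paper's actual measure, and the homomorphic-encryption protocol for computing it privately, are defined in the paper. The greedy selection loop below is likewise a hypothetical stand-in.

```python
# Illustrative sketch only -- not Fed-CBS. One natural class-imbalance measure
# for a cohort is the squared distance between its pooled label distribution
# and the uniform distribution; Fed-CBS evaluates its measure under
# homomorphic encryption so the server never sees raw label counts.
import numpy as np

def imbalance(counts: np.ndarray, cohort: list[int]) -> float:
    """counts: (num_clients, num_classes) array of per-client label counts."""
    pooled = counts[cohort].sum(axis=0)
    p = pooled / pooled.sum()               # pooled class distribution of the cohort
    u = 1.0 / p.size                        # uniform reference
    return float(((p - u) ** 2).sum())      # 0 means perfectly class-balanced

def select(counts: np.ndarray, m: int) -> list[int]:
    """Hypothetical greedy selection: grow the cohort so pooled data stays balanced."""
    cohort: list[int] = []
    remaining = set(range(counts.shape[0]))
    for _ in range(m):
        best = min(remaining, key=lambda i: imbalance(counts, cohort + [i]))
        cohort.append(best)
        remaining.remove(best)
    return cohort
```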
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- A Unified Analysis of Federated Learning with Arbitrary Client Participation [33.86606068136201]
Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.
It is important to understand how partial client participation affects convergence.
We provide a unified convergence analysis for FL with arbitrary client participation.
arXiv Detail & Related papers (2022-05-26T21:56:31Z)
- FL Games: A federated learning framework for distribution shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-05-23T07:51:45Z)
- Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability [46.85205907718874]
Federated learning is a machine learning framework in which many clients collaboratively train a model without sharing their training data.
In this work, we consider a practical issue when deploying federated learning in mobile environments: intermittent client availability.
We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short).
Our theoretical analysis shows that FedLaAvg attains a convergence rate of $\mathcal{O}(E^{1/2}/(N^{1/4}T^{1/2}))$, achieving a sublinear speedup with respect to the total number of clients.
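Read from the name and abstract alone, "latest averaging" suggests the server caches each client's most recent gradient and averages over all $N$ clients every round, reusing stale gradients for currently unavailable clients. The sketch below encodes that reading on a toy quadratic objective; the availability model, step size, and omission of the local epochs $E$ are our simplifications, not the paper's specification.

```python
# Hedged sketch of "latest averaging" as we read it from the abstract, NOT the
# paper's specification: the server averages the most recent gradient of every
# client, reusing stale gradients for clients that are currently unavailable.
import numpy as np

rng = np.random.default_rng(1)
N, d, T, lr = 50, 10, 200, 0.1        # clients, dimension, rounds, step size (all illustrative)
A = rng.normal(size=(N, d))           # toy objectives f_i(x) = 0.5 * ||x - a_i||^2
x = np.zeros(d)                       # global model
latest_grad = np.zeros((N, d))        # cached most-recent gradient per client

for t in range(T):
    available = rng.random(N) < 0.3                 # intermittent availability (toy model)
    latest_grad[available] = x - A[available]       # refresh only available clients' gradients
    x -= lr * latest_grad.mean(axis=0)              # step with the average of ALL cached gradients
```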
arXiv Detail & Related papers (2020-02-18T06:32:18Z)