FedGP: Correlation-Based Active Client Selection for Heterogeneous
Federated Learning
- URL: http://arxiv.org/abs/2103.13822v1
- Date: Wed, 24 Mar 2021 03:25:14 GMT
- Title: FedGP: Correlation-Based Active Client Selection for Heterogeneous
Federated Learning
- Authors: Minxue Tang, Xuefei Ning, Yitu Wang, Yu Wang and Yiran Chen
- Abstract summary: We propose FedGP -- a federated learning framework built on a correlation-based client selection strategy.
We develop a GP training method that reuses historical loss samples efficiently to reduce the communication cost.
Based on the learned correlations, we derive a client selection strategy that enlarges the expected reduction of the global loss in each round.
- Score: 33.996041254246585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Client-wise heterogeneity is one of the major issues that hinder effective
training in federated learning (FL). Since the data distribution on each client
may differ dramatically, the client selection strategy can largely influence
the convergence rate of the FL process. Several recent studies adopt active
client selection strategies. However, they neglect the loss correlations
between the clients and achieve only marginal improvements over the uniform
selection strategy. In this work, we propose FedGP -- a federated learning
framework built on a correlation-based client selection strategy, to boost the
convergence rate of FL. Specifically, we first model the loss correlations
between the clients with a Gaussian Process (GP). To make the GP training
feasible in the communication-bounded FL process, we develop a GP training
method utilizing the historical samples efficiently to reduce the communication
cost. Finally, based on the correlations we learned, we derive the client
selection with an enlarged reduction of expected global loss in each round. Our
experimental results show that compared to the latest active client selection
strategy, FedGP can improve the convergence rates by $1.3\sim2.3\times$ and
$1.2\sim1.4\times$ on FMNIST and CIFAR-10, respectively.
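As a concrete illustration of the selection step, here is a minimal sketch (ours, not the authors' released code): it assumes the GP stage has already produced a covariance matrix `Sigma` over client losses, and it uses a greedy variance-reduction score as a stand-in for the paper's expected-loss-reduction criterion.

```python
import numpy as np

def select_clients(Sigma, k):
    """Greedily pick k clients whose observed losses most reduce the
    uncertainty about the global loss (the sum of all client losses),
    given the client-loss covariance Sigma learned by the GP."""
    n = Sigma.shape[0]
    chosen, remaining = [], list(range(n))
    cov_global = Sigma.sum(axis=0)          # Cov(global loss, each client loss)
    for _ in range(k):
        best, best_score = None, -np.inf
        for c in remaining:
            idx = chosen + [c]
            S_ii = Sigma[np.ix_(idx, idx)]  # covariance within the candidate set
            s_gi = cov_global[idx]
            # Variance of the global loss explained by observing this subset.
            score = s_gi @ np.linalg.solve(S_ii, s_gi)
            if score > best_score:
                best, best_score = c, score
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy usage with a synthetic positive-definite covariance over 10 clients.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
print(select_clients(A @ A.T, k=3))
```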
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
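The single-round step above lends itself to a short sketch. The greedy heuristic below is our assumed way to minimize the gradient-space estimation error; the paper's actual optimizer and its multi-round fairness constraint are not shown.

```python
import numpy as np

def select_subset(grads, k):
    """Greedily pick k clients whose average gradient best approximates the
    average over ALL clients, i.e. minimize the gradient-space estimation
    error of the subset. grads: (n_clients, d) array."""
    target = grads.mean(axis=0)
    chosen = []
    for _ in range(k):
        errs = np.full(len(grads), np.inf)
        for c in range(len(grads)):
            if c not in chosen:
                approx = grads[chosen + [c]].mean(axis=0)
                errs[c] = np.linalg.norm(approx - target)
        chosen.append(int(np.argmin(errs)))
    return chosen

# Toy usage: 20 clients, 5-dimensional gradient estimates.
rng = np.random.default_rng(1)
print(select_subset(rng.normal(size=(20, 5)), k=4))
```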
- Federated Learning Can Find Friends That Are Advantageous [14.993730469216546]
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
arXiv Detail & Related papers (2024-02-07T17:46:37Z)
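One simple way to realize adaptive aggregation weights is to score each client's update by its alignment with a reference direction; the cosine rule below is our illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def adaptive_weights(updates, reference):
    """Assign each client an aggregation weight from the alignment of its
    update with a reference direction (e.g., a gradient on data matching the
    learning objective); misaligned clients get weight zero."""
    sims = np.array([u @ reference for u in updates])
    norms = np.array([np.linalg.norm(u) for u in updates]) * np.linalg.norm(reference)
    scores = np.clip(sims / (norms + 1e-12), 0.0, None)  # cosine, floored at 0
    total = scores.sum()
    if total == 0.0:                   # nobody aligned: fall back to uniform
        return np.full(len(updates), 1.0 / len(updates))
    return scores / total

# Toy usage: weighted average of 5 client updates.
rng = np.random.default_rng(2)
updates = rng.normal(size=(5, 8))
w = adaptive_weights(updates, reference=updates.mean(axis=0))
global_update = w @ updates
```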
- Utilizing Free Clients in Federated Learning for Focused Model Enhancement [9.370655190768163]
Federated Learning (FL) is a distributed machine learning approach to learn models on decentralized heterogeneous data.
We present FedALIGN (Federated Adaptive Learning with Inclusion of Global Needs) to address this challenge.
arXiv Detail & Related papers (2023-10-06T18:23:40Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
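Since FedLALR is described as a local variant of AMSGrad with client-specific learning rates, a per-client AMSGrad optimizer conveys the idea; this sketch omits bias correction and the paper's exact scheduling rule.

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad-style optimizer: each client keeps its own moment
    estimates, so its effective step size auto-tunes to its local data."""
    def __init__(self, dim, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)       # first moment (momentum)
        self.v = np.zeros(dim)       # second moment
        self.v_hat = np.zeros(dim)   # AMSGrad: running max of v

    def step(self, params, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)  # keep denominator monotone
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Each client runs its own optimizer between communication rounds.
opt = LocalAMSGrad(dim=4)
params = np.zeros(4)
for g in np.random.default_rng(3).normal(size=(10, 4)):
    params = opt.step(params, g)
```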
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Adaptive Control of Client Selection and Gradient Compression for Efficient Federated Learning [28.185096784982544]
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing local data.
We propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression.
Experiments on both real-world prototypes and simulations show that FedCG can provide up to a 5.3$\times$ speedup compared to other methods.
arXiv Detail & Related papers (2022-12-19T14:19:07Z)
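FedCG pairs client selection with gradient compression. As one common compressor such a framework could tune per client, here is a top-k sparsification sketch; the fixed `ratio` argument stands in for FedCG's adaptive choice, which this sketch does not implement.

```python
import numpy as np

def topk_compress(grad, ratio):
    """Keep only the largest-magnitude fraction `ratio` of gradient entries
    and ship them as sparse (index, value) pairs."""
    k = max(1, int(ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the k largest |g_i|
    return idx, grad[idx]

def decompress(idx, values, dim):
    """Server-side reconstruction of the sparse update."""
    out = np.zeros(dim)
    out[idx] = values
    return out

# Toy usage: keep 5% of a 1000-dimensional gradient.
g = np.random.default_rng(4).normal(size=1000)
idx, vals = topk_compress(g, ratio=0.05)
g_hat = decompress(idx, vals, g.size)
```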
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server less frequently and at lower communication bandwidth cost.
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
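The anchor-matching objective can be written as a squared distance between each feature and its class anchor; the loss below is our simplification of FedFM's category-wise matching, not the paper's exact formulation.

```python
import numpy as np

def anchor_matching_loss(features, labels, anchors):
    """Mean squared distance between each example's feature and the shared
    anchor of its class; minimizing it pulls every client's features toward
    the same category-wise anchors.
    features: (batch, d), labels: (batch,), anchors: (num_classes, d)."""
    diffs = features - anchors[labels]
    return float((diffs ** 2).sum(axis=1).mean())

# Toy usage: batch of 6 features, 3 classes, 4-dimensional feature space.
rng = np.random.default_rng(5)
feats = rng.normal(size=(6, 4))
labels = rng.integers(0, 3, size=6)
anchors = rng.normal(size=(3, 4))
print(anchor_matching_loss(feats, labels, anchors))
```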
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
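A sketch of class-balanced sampling: the imbalance measure below (sum of squared class proportions) is our stand-in for the paper's measure, and the homomorphic-encryption step that keeps it privacy-preserving is omitted.

```python
import numpy as np

def class_imbalance(label_counts):
    """Sum of squared class proportions: equals 1/C when the C classes are
    perfectly balanced and 1.0 when all samples share one class."""
    p = label_counts / label_counts.sum()
    return float((p ** 2).sum())

def sample_clients(client_label_counts, k):
    """Greedily add the client whose data keeps the selected group's pooled
    label distribution most class-balanced."""
    chosen = []
    counts = np.zeros(client_label_counts.shape[1])
    for _ in range(k):
        scores = [class_imbalance(counts + c) if i not in chosen else np.inf
                  for i, c in enumerate(client_label_counts)]
        best = int(np.argmin(scores))
        chosen.append(best)
        counts += client_label_counts[best]
    return chosen

# Toy usage: 8 clients, 4 classes, random per-client label histograms.
rng = np.random.default_rng(6)
hists = rng.integers(1, 50, size=(8, 4)).astype(float)
print(sample_clients(hists, k=3))
```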
- Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning [8.627405016032615]
We present UCB-CS, a bandit-based communication-efficient client selection strategy that achieves faster convergence with lower communication overhead.
We also demonstrate how client selection can be used to improve fairness.
arXiv Detail & Related papers (2020-12-14T23:35:03Z)
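A generic upper-confidence-bound selection rule captures the bandit idea; UCB-CS's specific reward definition and communication-efficiency mechanics are not reproduced here.

```python
import numpy as np

def ucb_select(mean_reward, pulls, t, k, c=2.0):
    """Select k clients by upper confidence bound: favor clients with high
    observed reward (e.g., past loss reduction) plus an exploration bonus
    that grows for rarely-chosen clients."""
    bonus = c * np.sqrt(np.log(t + 1) / np.maximum(pulls, 1))
    scores = np.where(pulls == 0, np.inf, mean_reward + bonus)  # try new clients first
    return np.argsort(scores)[-k:]   # indices of the k highest-scoring clients

# Toy usage: 10 clients at round t=25.
rng = np.random.default_rng(7)
print(ucb_select(rng.random(10), rng.integers(0, 5, size=10), t=25, k=3))
```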
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies [29.127689561987964]
Federated learning enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.
We show that biasing client selection towards clients with higher local loss achieves faster error convergence.
We propose Power-of-Choice, a communication- and computation-efficient client selection framework.
arXiv Detail & Related papers (2020-10-03T01:04:17Z)
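The selection rule described above translates directly into code: sample a candidate set uniformly, then keep the highest-loss clients. Details such as the candidate-set size schedule are omitted.

```python
import numpy as np

def power_of_choice(local_losses, d, k, rng=None):
    """Power-of-Choice selection: sample a candidate set of d clients
    uniformly at random, then keep the k candidates with the highest current
    local loss, biasing selection toward clients the model fits worst."""
    if rng is None:
        rng = np.random.default_rng()
    losses = np.asarray(local_losses)
    candidates = rng.choice(losses.size, size=d, replace=False)
    ranked = candidates[np.argsort(losses[candidates])]
    return ranked[-k:]               # the k highest-loss candidates

# Toy usage: 100 clients, candidate set of 10, select 3.
print(power_of_choice(np.random.default_rng(8).random(100), d=10, k=3))
```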