An Efficiency-boosting Client Selection Scheme for Federated Learning
with Fairness Guarantee
- URL: http://arxiv.org/abs/2011.01783v5
- Date: Mon, 4 Sep 2023 19:24:45 GMT
- Title: An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee
- Authors: Tiansheng Huang, Weiwei Lin, Wentai Wu, Ligang He, Keqin Li and Albert Y. Zomaya
- Abstract summary: Federated Learning is a new paradigm to cope with the privacy issue by allowing clients to perform model training locally.
The client selection policy is critical to an FL process in terms of training efficiency, the quality of the final model, and fairness.
In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C2MAB-based method to estimate the model exchange time.
- Score: 36.07970788489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The issue of potential privacy leakage during centralized AI's model training
has drawn intensive concern from the public. A Parallel and Distributed
Computing (or PDC) scheme, termed Federated Learning (FL), has emerged as a new
paradigm to cope with the privacy issue by allowing clients to perform model
training locally, without having to upload their personal sensitive data. In
FL, the number of clients can be quite large, but the bandwidth available for
model distribution and re-upload is rather limited, making it sensible to
involve only a subset of the volunteer clients in each round of training. The
client selection policy is critical to an FL process in terms of training
efficiency, the quality of the final model, and fairness. In this paper, we
model fairness-guaranteed client selection as a Lyapunov optimization problem
and propose a C2MAB-based method to estimate the model exchange time between
each client and the server, based on which we design a fairness-guaranteed
algorithm termed RBCS-F to solve the problem. The regret of RBCS-F is strictly
bounded by a finite constant, justifying its theoretical feasibility. Beyond
the theoretical results, further empirical evidence comes from our real
training experiments on public datasets.
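
To make the mechanism in the abstract concrete, below is a minimal sketch of fairness-aware client selection that combines Lyapunov virtual queues (for the long-term fairness guarantee) with a UCB-style estimate of each client's model exchange time. This is not the authors' RBCS-F implementation: the names guaranteed_rate and tradeoff, the simulated exchange times, and the plain per-client averages (where the real C2MAB estimator would use contextual client features) are all assumptions made for this illustration.

```python
# Illustrative sketch only; see the caveats above.
import numpy as np

rng = np.random.default_rng(0)
num_clients, per_round, rounds = 20, 5, 50
guaranteed_rate = 0.15   # assumed long-term selection fraction promised to each client
tradeoff = 1.0           # assumed weight between fairness credit and estimated speed

queues = np.zeros(num_clients)    # Lyapunov virtual queues tracking selection deficit
time_sum = np.zeros(num_clients)  # accumulated observed exchange times
pulls = np.zeros(num_clients)     # number of times each client has been selected

for t in range(1, rounds + 1):
    # UCB-style estimate of each client's exchange time: empirical mean minus an
    # exploration bonus, clipped at zero (lower means the client looks faster).
    mean_time = np.where(pulls > 0, time_sum / np.maximum(pulls, 1.0), 0.0)
    bonus = np.sqrt(2.0 * np.log(t + 1.0) / np.maximum(pulls, 1.0))
    exchange_time_ucb = np.clip(mean_time - bonus, 0.0, None)

    # Per-client score: a large queue backlog (a starved client) raises the score,
    # a large estimated exchange time (a slow client) lowers it.
    score = queues - tradeoff * exchange_time_ucb
    selected = np.argsort(score)[-per_round:]

    # Simulate the exchange times actually observed for the selected clients.
    observed = rng.uniform(1.0, 5.0, size=per_round)
    time_sum[selected] += observed
    pulls[selected] += 1

    # Lyapunov queue update: the deficit grows by the guaranteed rate every round
    # and is drained whenever the client is actually selected.
    served = np.zeros(num_clients)
    served[selected] = 1.0
    queues = np.maximum(queues + guaranteed_rate - served, 0.0)

print("selection counts per client:", pulls.astype(int))
```

Raising tradeoff favours faster clients at the expense of fairness, which mirrors the efficiency/fairness tension the paper formalizes; the regret bound stated in the abstract applies to RBCS-F itself, not to this simplified loop.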
Related papers
- Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization [30.305956110710266]
Federated learning is beneficial for enabling the training of models across distributed clients.
Existing work has overlooked the coexistence of model training and inference under clients' limited resources.
This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients.
arXiv Detail & Related papers (2023-12-20T09:27:09Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems have to face large communication burdens and the risk of disruption if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared parameters aggregation process, we propose DFed, integrating the local Sharpness Minimization.
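
As a rough illustration of the alternating update described above, the sketch below splits a tiny model into a shared body (the part that would be exchanged with other clients) and a personal head, and updates them in turn during a local step. The specific split, the plain SGD optimizers, and the synthetic data are assumptions for this example and do not reproduce the paper's DFed procedure.

```python
# Alternating shared/personal updates; an illustrative sketch, not DFed itself.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x, y = torch.randn(32, 10), torch.randn(32, 1)

shared = torch.nn.Linear(10, 16)   # would be aggregated across clients
personal = torch.nn.Linear(16, 1)  # kept local to this client

opt_shared = torch.optim.SGD(shared.parameters(), lr=0.05)
opt_personal = torch.optim.SGD(personal.parameters(), lr=0.05)

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

for step in range(10):
    # (1) Update the personal head while the shared body stays frozen.
    set_trainable(shared, False)
    set_trainable(personal, True)
    loss = F.mse_loss(personal(shared(x).relu()), y)
    opt_personal.zero_grad()
    loss.backward()
    opt_personal.step()

    # (2) Update the shared body while the personal head stays frozen.
    set_trainable(shared, True)
    set_trainable(personal, False)
    loss = F.mse_loss(personal(shared(x).relu()), y)
    opt_shared.zero_grad()
    loss.backward()
    opt_shared.step()

# Only shared.state_dict() would then be exchanged with neighbouring clients.
print("final local loss:", float(loss))
```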
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
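
For a concrete picture of the mixture-modelling ingredient, the sketch below fits a separate scikit-learn GaussianMixture to each client's inputs and checks the likelihood of an out-of-distribution point, which is the kind of uncertainty signal the summary alludes to. The synthetic clients and the per-client fitting are illustrative assumptions; FedGMM's actual federated training procedure is not reproduced.

```python
# Fitting Gaussian mixtures to (synthetic) client data; illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_data = {
    "client_a": rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
    "client_b": rng.normal(loc=3.0, scale=0.5, size=(200, 2)),
}

# One mixture per client, fitted to that client's input distribution.
mixtures = {
    name: GaussianMixture(n_components=2, random_state=0).fit(data)
    for name, data in client_data.items()
}

# A point with low log-likelihood under every client's mixture can be flagged
# as a novel sample.
novel_point = np.array([[10.0, -10.0]])
for name, gmm in mixtures.items():
    print(name, "avg log-likelihood of novel point:", gmm.score(novel_point))
```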
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training [1.0413504599164103]
Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.
One of the significant challenges of FL is limited computation and low communication bandwidth in resource-limited edge client nodes.
We propose Salient Grads, which simplifies the process of sparse training by choosing a data aware subnetwork before training.
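
The sketch below shows one common way to choose a data-aware subnetwork before training: score every parameter by |weight x gradient| on a single local batch (a SNIP-style criterion) and keep the top fraction. The exact saliency score and the cross-client aggregation used by Salient Grads may differ, so treat this purely as an illustration of the idea.

```python
# SNIP-style saliency mask computed before training; illustration only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))

# One forward/backward pass on a small local batch to obtain gradients.
loss = F.cross_entropy(model(x), y)
loss.backward()

keep_ratio = 0.2
masks = {}
for name, param in model.named_parameters():
    score = (param.detach() * param.grad).abs().flatten()
    k = max(1, int(keep_ratio * score.numel()))
    threshold = torch.topk(score, k).values.min()
    masks[name] = (score >= threshold).reshape(param.shape).float()

sparsity = sum(m.sum().item() for m in masks.values()) / sum(m.numel() for m in masks.values())
print(f"kept {sparsity:.0%} of the weights in the subnetwork mask")
```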
arXiv Detail & Related papers (2023-04-15T06:46:37Z)
- FedCliP: Federated Learning with Client Pruning [3.796320380104124]
Federated learning (FL) is a newly emerging distributed learning paradigm.
One fundamental bottleneck in FL is the heavy communication overheads between the distributed clients and the central server.
We propose FedCliP, the first communication efficient FL training framework from a macro perspective.
arXiv Detail & Related papers (2023-01-17T09:15:37Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS)
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
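
As a simplified example of what such a class-imbalance measure can compute, the sketch below scores a candidate group of clients by the squared distance between their pooled label distribution and the uniform distribution. The actual Fed-CBS measure and its homomorphic-encryption evaluation are not reproduced; the per-client label counts here are hypothetical.

```python
# A toy class-imbalance score for a group of clients; illustration only.
import numpy as np

num_classes = 10
# Hypothetical clients: client i only holds labels 2*i and 2*i + 1.
client_label_counts = [
    np.bincount(np.random.default_rng(i).integers(2 * i, 2 * i + 2, size=100),
                minlength=num_classes)
    for i in range(5)
]

def class_imbalance(selected):
    """Squared distance of the group's pooled label distribution from uniform."""
    counts = sum(client_label_counts[i] for i in selected)
    dist = counts / counts.sum()
    return float(np.sum((dist - 1.0 / num_classes) ** 2))  # 0 = perfectly balanced

print("imbalance of clients {0, 1}:", class_imbalance([0, 1]))   # skewed group
print("imbalance of all clients:", class_imbalance(range(5)))    # balanced group
```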
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
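
A toy simulation of the round structure just described is sketched below: every client trains locally, broadcasts its model, one client wins the competition to generate the block for the round, and all clients aggregate the models recorded in that block before the next round. The block generation, consensus, and the analysis of the optimal K are deliberately abstracted away, and the averaging-based aggregation is an assumption for this example.

```python
# Toy one-block-per-round simulation of a BLADE-FL-style loop; illustration only.
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)
num_clients, dim = 4, 8
models = [np.zeros(dim) for _ in range(num_clients)]

def local_train(w):
    # Stand-in for local SGD: a small random perturbation of the weights.
    return w + rng.normal(scale=0.1, size=w.shape)

for rnd in range(3):
    # 1) Local training, then broadcast to all other clients.
    trained = [local_train(w) for w in models]

    # 2) One client wins the block-generation competition; its block records the
    #    models it received this round.
    winner = random.randrange(num_clients)
    block = {"round": rnd, "generator": winner, "models": trained}

    # 3) Every client aggregates the models recorded in the block (plain averaging
    #    here) before starting the next round of local training.
    aggregated = np.mean(block["models"], axis=0)
    models = [aggregated.copy() for _ in range(num_clients)]

print("model after 3 decentralized rounds:", models[0].round(3))
```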
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
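
One simple way to make the influence notion concrete is a leave-one-out comparison of the aggregated parameters, sketched below; the paper's formal definition of Influence and its efficient estimator are more refined than this brute-force illustration, and the FedAvg-style averaging here is an assumption.

```python
# Leave-one-out client influence on the aggregate; illustration only.
import numpy as np

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=16) for _ in range(5)]   # hypothetical local models

def fedavg(updates):
    return np.mean(updates, axis=0)

full = fedavg(client_updates)
for i, _ in enumerate(client_updates):
    # Aggregate computed without client i; the shift it causes is its "influence".
    loo = fedavg([u for j, u in enumerate(client_updates) if j != i])
    influence = np.linalg.norm(full - loo)
    print(f"client {i}: influence ~ {influence:.4f}")
```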
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a client selection scheme to solve the problem.
arXiv Detail & Related papers (2020-11-17T16:35:24Z)