Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning
- URL: http://arxiv.org/abs/2405.04122v1
- Date: Tue, 7 May 2024 08:44:29 GMT
- Title: Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning
- Authors: Chunlin Tian, Zhan Shi, Xinpeng Qin, Li Li, Chengzhong Xu
- Abstract summary: Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
- Score: 20.412469498888292
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning (FL) enables multiple devices to collaboratively train a shared model while ensuring data privacy. The selection of participating devices in each training round critically affects both the model performance and training efficiency, especially given the vast heterogeneity in training capabilities and data distribution across devices. To address these challenges, we introduce a novel device selection solution called FedRank, an end-to-end, ranking-based approach that is pre-trained by imitation learning against state-of-the-art analytical approaches. It not only considers data and system heterogeneity at runtime but also adaptively and efficiently chooses the most suitable clients for model training. Specifically, FedRank views client selection in FL as a ranking problem and employs a pairwise training strategy for the smart selection process. Additionally, an imitation learning-based approach is designed to counteract the cold-start issues often seen in state-of-the-art learning-based approaches. Experimental results reveal that FedRank boosts model accuracy by 5.2% to 56.9%, accelerates training convergence by up to 2.01×, and reduces energy consumption by up to 40.1%.
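As an illustration of the pairwise ranking idea described above, the sketch below trains a small scorer on client feature vectors with a RankNet-style logistic loss, where the preferred client in each pair could be labeled by the analytical teacher policy used for imitation learning. The feature set and network shape are assumptions for illustration, not the authors' released implementation.
```python
# Illustrative sketch of pairwise (RankNet-style) client ranking, assuming
# each client is described by a small feature vector (e.g., local loss,
# compute speed, data size). NOT the authors' implementation.
import torch
import torch.nn as nn

class ClientScorer(nn.Module):
    """Maps a client feature vector to a scalar selection score."""
    def __init__(self, n_features: int = 3, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def pairwise_loss(score_i, score_j):
    # Logistic pairwise loss: client i is labeled preferable to client j
    # (e.g., by the analytical "teacher" policy used for imitation learning).
    return torch.nn.functional.softplus(score_j - score_i).mean()

# Toy usage: features of preferred clients (i) vs. less useful ones (j).
scorer = ClientScorer()
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
xi, xj = torch.randn(32, 3), torch.randn(32, 3)
loss = pairwise_loss(scorer(xi), scorer(xj))
opt.zero_grad(); loss.backward(); opt.step()

# At selection time, rank all candidate clients by score and take the top-k.
candidates = torch.randn(100, 3)
topk = torch.topk(scorer(candidates), k=10).indices
```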
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
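A minimal sketch of a client-local AMSGrad-style step with a per-client base learning rate follows; the rule a client would use to set that rate is an assumption, not FedLALR's exact formula.
```python
# Minimal sketch of a client-local AMSGrad-style step with a per-client base
# learning rate. The scheduling rule here is illustrative, not FedLALR's.
import numpy as np

def amsgrad_step(w, grad, state, base_lr, beta1=0.9, beta2=0.999, eps=1e-8):
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)          # AMSGrad: monotone second moment
    w = w - base_lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

# Each client keeps its own optimizer state and base_lr (e.g., scaled by its
# local data size or gradient statistics -- an assumption for illustration).
w = np.zeros(4)
state = (np.zeros(4), np.zeros(4), np.zeros(4))
for _ in range(5):
    grad = np.random.randn(4)
    w, state = amsgrad_step(w, grad, state, base_lr=0.01)
```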
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by recasting the optimization of training latency as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
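For intuition, the pairing idea can be sketched as a simple greedy heuristic that matches the fastest remaining client with the slowest one; the paper itself formulates this as graph edge selection, so the rule below is only an illustrative stand-in.
```python
# Illustrative greedy pairing: match the fastest remaining client with the
# slowest one so split-learning partners have complementary compute.
def greedy_pair(compute_speeds: dict) -> list:
    order = sorted(compute_speeds, key=compute_speeds.get)  # slow -> fast
    pairs = []
    while len(order) >= 2:
        slow, fast = order.pop(0), order.pop(-1)
        pairs.append((slow, fast))
    return pairs

print(greedy_pair({"c1": 1.0, "c2": 9.0, "c3": 3.0, "c4": 7.0}))
# -> [('c1', 'c2'), ('c3', 'c4')]
```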
arXiv Detail & Related papers (2023-08-26T11:10:54Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
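A hedged sketch of one such measure follows: the squared distance between the group's aggregated label distribution and the uniform distribution. Fed-CBS derives its measure under homomorphic encryption, which is omitted here.
```python
# Sketch of a class-imbalance measure for a candidate client group: squared
# distance of the aggregated label distribution from uniform. Fed-CBS computes
# a related quantity privately via homomorphic encryption (omitted here).
import numpy as np

def class_imbalance(label_counts_per_client: list, num_classes: int) -> float:
    total = np.sum(label_counts_per_client, axis=0).astype(float)
    p = total / total.sum()                      # aggregated label distribution
    uniform = np.full(num_classes, 1.0 / num_classes)
    return float(np.sum((p - uniform) ** 2))     # 0 means perfectly balanced

clients = [np.array([50, 0, 0]), np.array([0, 30, 20])]
print(class_imbalance(clients, num_classes=3))
```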
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles stragglers and statistical heterogeneity.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
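A minimal PyTorch sketch of that split, with a shared body that the server aggregates and a per-client head that stays local, is given below; the layer sizes are illustrative assumptions.
```python
# Sketch of the shared-representation idea: a global feature extractor that is
# federated/averaged, plus a per-client head that stays local. Layer sizes are
# illustrative assumptions.
import torch.nn as nn

class PersonalizedModel(nn.Module):
    def __init__(self, in_dim=32, rep_dim=16, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())  # shared, aggregated
        self.head = nn.Linear(rep_dim, n_classes)                          # personal, kept local

    def forward(self, x):
        return self.head(self.body(x))

# Server aggregation would average only body.state_dict() across clients,
# leaving each client's head untouched.
```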
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - WAFFLE: Weighted Averaging for Personalized Federated Learning [38.241216472571786]
We introduce WAFFLE, a personalized collaborative machine learning algorithm based on SCAFFOLD.
WAFFLE uses the Euclidean distance between clients' updates to weigh their individual contributions.
Our experiments demonstrate the effectiveness of WAFFLE compared with other methods.
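One plausible reading of that weighting rule, sketched below as an assumption, is to weight each client's update inversely to its Euclidean distance from a reference update.
```python
# Sketch of distance-based weighted averaging: updates closer (in Euclidean
# distance) to a reference update get larger weight. The inverse-distance
# rule is an assumption for illustration, not WAFFLE's exact formula.
import numpy as np

def waffle_style_average(updates: list, reference: np.ndarray, eps=1e-8):
    dists = np.array([np.linalg.norm(u - reference) for u in updates])
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()                      # normalize to sum to 1
    return sum(w * u for w, u in zip(weights, updates))

ref = np.ones(3)
updates = [np.ones(3), np.zeros(3), 2 * np.ones(3)]
print(waffle_style_average(updates, ref))         # dominated by the closest update
```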
arXiv Detail & Related papers (2021-10-13T18:40:54Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a stochastic client selection scheme that accounts for client volatility.
arXiv Detail & Related papers (2020-11-17T16:35:24Z) - CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning using a category coverage maximisation strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset with 70% lower network transfer than FedAvg.
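A category-coverage strategy can be illustrated as a greedy set cover over the label categories clients hold; the rule below is a sketch, not necessarily CatFedAvg's exact procedure.
```python
# Illustrative greedy set cover for category coverage: repeatedly pick the
# client adding the most not-yet-covered classes.
def cover_categories(client_labels: dict, num_classes: int) -> list:
    covered, chosen = set(), []
    while len(covered) < num_classes and len(chosen) < len(client_labels):
        best = max(
            (c for c in client_labels if c not in chosen),
            key=lambda c: len(client_labels[c] - covered),
        )
        if not client_labels[best] - covered:
            break  # no remaining client adds new classes
        chosen.append(best)
        covered |= client_labels[best]
    return chosen

print(cover_categories({"a": {0, 1}, "b": {1, 2}, "c": {3}}, num_classes=4))
# -> ['a', 'b', 'c']
```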
arXiv Detail & Related papers (2020-11-14T06:52:02Z) - Oort: Efficient Federated Learning via Guided Participant Selection [5.01181273401802]
Federated Learning (FL) enables in-situ model training and testing on edge data.
Existing efforts randomly select FL participants, which leads to poor model and system efficiency.
Oort improves time-to-accuracy performance by 1.2x-14.1x and final model accuracy by 1.3%-9.8%.
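Oort scores clients by a utility that combines loss-based statistical utility with a penalty for clients slower than a preferred round duration; the sketch below follows that general shape with simplified constants, not the paper's exact formulation.
```python
# Sketch of an Oort-style client utility: statistical utility from training
# losses, multiplied by a system penalty when the client is slower than the
# preferred round duration. Simplified from the paper's formulation.
import math

def client_utility(sample_losses, duration, pref_duration, alpha=2.0):
    n = len(sample_losses)
    stat = n * math.sqrt(sum(l * l for l in sample_losses) / n)
    sys_penalty = (pref_duration / duration) ** alpha if duration > pref_duration else 1.0
    return stat * sys_penalty

print(client_utility([0.5, 1.2, 0.8], duration=12.0, pref_duration=10.0))
```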
arXiv Detail & Related papers (2020-10-12T23:48:30Z)