Prophet: Proactive Candidate-Selection for Federated Learning by
Predicting the Qualities of Training and Reporting Phases
- URL: http://arxiv.org/abs/2002.00577v2
- Date: Tue, 19 May 2020 01:55:13 GMT
- Title: Prophet: Proactive Candidate-Selection for Federated Learning by
Predicting the Qualities of Training and Reporting Phases
- Authors: Huawei Huang, Kangying Lin, Song Guo, Pan Zhou, Zibin Zheng
- Abstract summary: In 5G networks, training latency remains an obstacle preventing Federated Learning (FL) from being widely adopted.
One of the most fundamental problems leading to large latency is poor candidate-selection for FL.
In this paper, we study proactive candidate-selection for FL.
- Score: 66.01459702625064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although the challenge of the device connection is much relieved in 5G
networks, the training latency is still an obstacle preventing Federated
Learning (FL) from being widely adopted. One of the most fundamental problems
leading to large latency is poor candidate-selection for FL. In the
dynamic environment, the mobile devices selected by the existing reactive
candidate-selection algorithms are likely to fail to complete the training and
reporting phases of FL, because the FL parameter server only knows the
currently-observed resources of all candidates. To this end, we study the
proactive candidate-selection for FL in this paper. We first let each candidate
device predict the qualities of both its training and reporting phases locally
using LSTM. Then, the proposed candidate-selection algorithm is implemented by
the Deep Reinforcement Learning (DRL) framework. Finally, the real-world
trace-driven experiments prove that the proposed approach outperforms the
existing reactive algorithms.
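The abstract describes a two-step scheme: each device forecasts the quality of its upcoming training and reporting phases from its own history, and the server then selects the devices most likely to finish both phases in time. The following is a minimal sketch of that loop under stated assumptions: a simple exponential moving average stands in for the per-device LSTM predictor, and a greedy deadline-filtered top-k rule stands in for the DRL policy. The device names and numbers are illustrative, not from the paper.

```python
# Hedged sketch of proactive candidate selection in the spirit of Prophet.
# Assumptions: an exponential moving average (EMA) replaces the per-device
# LSTM, and a greedy top-k rule replaces the DRL selector.

def predict_phase_time(history, alpha=0.5):
    """EMA forecast of the next phase duration (stand-in for the LSTM)."""
    est = history[0]
    for t in history[1:]:
        est = alpha * t + (1 - alpha) * est
    return est

def select_candidates(devices, deadline, k):
    """Pick up to k devices whose predicted train+report time meets the deadline."""
    scored = []
    for name, train_hist, report_hist in devices:
        total = predict_phase_time(train_hist) + predict_phase_time(report_hist)
        if total <= deadline:               # proactive: drop likely stragglers
            scored.append((total, name))
    scored.sort()                           # fastest predicted devices first
    return [name for _, name in scored[:k]]

# Hypothetical round histories (seconds per phase):
devices = [
    ("phone_a", [4.0, 4.2, 4.1], [1.0, 1.1, 1.0]),    # fast and stable
    ("phone_b", [9.0, 12.0, 15.0], [2.0, 2.5, 3.0]),  # degrading link
    ("tablet_c", [5.0, 4.8, 5.1], [1.5, 1.4, 1.5]),   # fast and stable
]
print(select_candidates(devices, deadline=8.0, k=2))
```

A reactive selector that only looks at the currently observed resources would have no basis for excluding phone_b, whose link is steadily degrading; the forecast makes that trend visible before the round starts.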
Related papers
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Ed-Fed: A generic federated learning framework with resource-aware client selection for edge devices [0.6875312133832078]
Federated learning (FL) has evolved as a prominent method for edge devices to cooperatively create a unified prediction model.
Despite numerous research frameworks for simulating FL algorithms, they do not facilitate comprehensive deployment for automatic speech recognition tasks.
This is where Ed-Fed, a comprehensive and generic FL framework, comes in as a foundation for future practical FL system research.
arXiv Detail & Related papers (2023-07-14T07:19:20Z)
- DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$3$S).
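The idea sketched in that summary is that a DPP kernel rewards selecting clients whose data profiles are jointly diverse, since the determinant of a kernel submatrix shrinks when rows are similar. Below is a hedged, self-contained sketch of the standard greedy MAP heuristic for DPP selection; the kernel values are illustrative stand-ins for client data profiles, not the paper's actual construction.

```python
# Hedged sketch of greedy DPP-based participant selection. The kernel L is
# a similarity matrix over clients (illustrative values, not from FL-DP3S);
# det(L_S) is large when the selected clients are mutually diverse.

def det(m):
    """Determinant by cofactor expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def submatrix(L, idx):
    return [[L[i][j] for j in idx] for i in idx]

def greedy_dpp_select(L, k):
    """Greedily add the client with the largest det(L_S) gain, k times."""
    selected, candidates = [], list(range(len(L)))
    for _ in range(k):
        best, best_gain = None, -1.0
        for c in candidates:
            gain = det(submatrix(L, selected + [c]))
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        candidates.remove(best)
    return selected

# Clients 0 and 1 hold near-identical data (similarity 0.95); client 2 is distinct.
L = [[1.0, 0.95, 0.1],
     [0.95, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
print(greedy_dpp_select(L, 2))  # picks a diverse pair rather than 0 and 1
```

Pairing clients 0 and 2 gives det ≈ 0.99, while the redundant pair 0 and 1 gives only det ≈ 0.0975, which is why the greedy step skips client 1 even though all three clients have the same individual quality.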
arXiv Detail & Related papers (2023-03-30T13:14:54Z)
- A Survey on Participant Selection for Federated Learning in Mobile Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of end devices can be selected in each round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z)
- On-the-fly Resource-Aware Model Aggregation for Federated Learning in Heterogeneous Edge [15.932747809197517]
Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
In this paper, we conduct an in-depth study of strategies to replace a central aggregation server with a flying master.
Our results demonstrate a significant runtime reduction using our flying-master FL framework compared to the original FL, based on measurements conducted in our EdgeAI testbed and over real 5G networks.
arXiv Detail & Related papers (2021-12-21T19:04:42Z)
- Critical Learning Periods in Federated Learning [11.138980572551066]
Federated learning (FL) is a popular technique to train machine learning (ML) models with decentralized data.
We show that the final test accuracy of FL is dramatically affected by the early phase of the training process.
arXiv Detail & Related papers (2021-09-12T21:06:07Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning [33.742677763076]
Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL heuristic to find the best candidate clients, together with an IoT client alarm application.
arXiv Detail & Related papers (2020-11-16T06:32:31Z)
- Convergence Time Optimization for Federated Learning over Wireless Networks [160.82696473996566]
A wireless network is considered in which wireless users transmit their local FL models (trained using their locally collected data) to a base station (BS).
The BS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all users.
Due to the limited number of resource blocks (RBs) in a wireless network, only a subset of users can be selected to transmit their local FL model parameters to the BS.
Since each user has unique training data samples, the BS prefers to include all local user FL models to generate a converged global FL model.
arXiv Detail & Related papers (2020-01-22T01:55:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.