Ed-Fed: A generic federated learning framework with resource-aware
client selection for edge devices
- URL: http://arxiv.org/abs/2307.07199v1
- Date: Fri, 14 Jul 2023 07:19:20 GMT
- Title: Ed-Fed: A generic federated learning framework with resource-aware
client selection for edge devices
- Authors: Zitha Sasindran, Harsha Yelchuri, T. V. Prabhakar
- Abstract summary: Federated learning (FL) has evolved as a prominent method for edge devices to cooperatively create a unified prediction model.
Although numerous research frameworks exist for simulating FL algorithms, they do not facilitate comprehensive deployment for automatic speech recognition tasks.
This is where Ed-Fed, a comprehensive and generic FL framework, comes in as a foundation for future practical FL system research.
- Score: 0.6875312133832078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has evolved as a prominent method for edge devices to
cooperatively create a unified prediction model while keeping their sensitive
training data local to the device. Despite the existence of numerous research
frameworks for simulating FL algorithms, they do not facilitate comprehensive
deployment for automatic speech recognition tasks on heterogeneous edge
devices. This is where Ed-Fed, a comprehensive and generic FL framework, comes
in as a foundation for future practical FL system research. We also propose a
novel resource-aware client selection algorithm to optimise the waiting time in
FL settings. We show that our approach can handle straggler devices and
dynamically set the training time for the selected devices in a round. Our
evaluation has shown that the proposed approach significantly optimises waiting
time in FL compared to conventional random client selection methods.
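The abstract describes the selection policy only at a high level. The following is a minimal, hypothetical sketch of resource-aware client selection with a dynamically set per-round training deadline; the ClientProfile fields, the battery threshold, and the 20% slack factor are illustrative assumptions, not details taken from the Ed-Fed paper.

```python
from dataclasses import dataclass
import random

@dataclass
class ClientProfile:
    """Resources a device might report during the FL handshake (illustrative fields)."""
    client_id: str
    samples: int           # number of local training examples
    sec_per_sample: float  # measured on-device training time per example
    battery: float         # remaining battery fraction in [0, 1]

def estimate_round_time(c: ClientProfile, local_epochs: int) -> float:
    """Predicted local training time for one FL round."""
    return c.samples * c.sec_per_sample * local_epochs

def select_clients(clients, k, local_epochs=1, min_battery=0.3):
    """Resource-aware selection: drop low-battery devices, then pick the k fastest
    so that slow devices (stragglers) do not stall the round."""
    eligible = [c for c in clients if c.battery >= min_battery]
    selected = sorted(eligible, key=lambda c: estimate_round_time(c, local_epochs))[:k]
    if not selected:
        raise RuntimeError("no eligible clients this round")
    # Dynamic per-round deadline: slowest selected client plus 20% slack, so the
    # server's waiting time is bounded by the selected cohort, not the whole pool.
    deadline = 1.2 * max(estimate_round_time(c, local_epochs) for c in selected)
    return selected, deadline

if __name__ == "__main__":
    pool = [ClientProfile(f"dev{i}", random.randint(200, 2000),
                          random.uniform(0.001, 0.01), random.random())
            for i in range(20)]
    chosen, deadline = select_clients(pool, k=5)
    print([c.client_id for c in chosen], f"round deadline ~ {deadline:.1f}s")
```

In this sketch the waiting time is bounded because the deadline is derived from the slowest selected client rather than the slowest device in the pool, which mirrors the paper's stated goal of handling stragglers and setting the training time per round.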
Related papers
- Efficient Data Distribution Estimation for Accelerated Federated Learning [5.085889377571319]
Federated Learning(FL) is a privacy-preserving machine learning paradigm where a global model is trained in-situ across a large number of distributed edge devices.
Devices are highly heterogeneous in both their system resources and training data.
Various client selection algorithms have been developed, showing promising performance improvement in terms of model coverage and accuracy.
arXiv Detail & Related papers (2024-06-03T20:33:17Z)
- FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL). A generic bandit-style client-selection sketch appears after this list.
arXiv Detail & Related papers (2023-03-18T09:45:58Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL is still not explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- A Survey on Participant Selection for Federated Learning in Mobile Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of end devices can be selected in each round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z)
- On-the-fly Resource-Aware Model Aggregation for Federated Learning in Heterogeneous Edge [15.932747809197517]
Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
In this paper, we conduct an in-depth study of strategies to replace a central aggregation server with a flying master.
Our results demonstrate a significant reduction in runtime using our flying master FL framework compared to the original FL, based on measurements conducted in our EdgeAI testbed and over real 5G networks.
arXiv Detail & Related papers (2021-12-21T19:04:42Z)
- On-device Federated Learning with Flower [22.719117235237036]
Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device.
Despite the algorithmic advancements in FL, the support for on-device training of FL algorithms on edge devices remains poor.
We present an exploration of on-device FL on various smartphones and embedded devices using the Flower framework.
arXiv Detail & Related papers (2021-04-07T10:42:14Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Prophet: Proactive Candidate-Selection for Federated Learning by Predicting the Qualities of Training and Reporting Phases [66.01459702625064]
In 5G networks, the training latency is still an obstacle preventing Federated Learning (FL) from being largely adopted.
One of the most fundamental problems that lead to large latency is the bad candidate-selection for FL.
In this paper, we study proactive candidate-selection for FL.
arXiv Detail & Related papers (2020-02-03T06:40:04Z)
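The multi-armed bandit entry above (BSFL) motivates treating client selection as a bandit problem. The sketch below is a generic, textbook UCB1 selector over clients, not the BSFL algorithm from that paper; the reward signal (e.g., per-round loss reduction) is an assumed placeholder.

```python
import math
import random

class UCBClientSelector:
    """Generic UCB1 bandit over FL clients: each client is an arm, and the
    observed per-round utility (e.g., loss reduction) is the reward."""

    def __init__(self, client_ids, c=2.0):
        self.c = c                                      # exploration weight
        self.counts = {cid: 0 for cid in client_ids}    # times each client was picked
        self.values = {cid: 0.0 for cid in client_ids}  # running mean reward
        self.t = 0                                      # number of selection rounds

    def select(self, k):
        """Return the k clients with the highest upper confidence bounds."""
        self.t += 1
        def ucb(cid):
            n = self.counts[cid]
            if n == 0:
                return float("inf")  # try every client at least once
            return self.values[cid] + math.sqrt(self.c * math.log(self.t) / n)
        return sorted(self.counts, key=ucb, reverse=True)[:k]

    def update(self, cid, reward):
        """Incorporate the reward observed after client cid finished a round."""
        self.counts[cid] += 1
        self.values[cid] += (reward - self.values[cid]) / self.counts[cid]

if __name__ == "__main__":
    selector = UCBClientSelector([f"dev{i}" for i in range(10)])
    for _ in range(30):                               # simulated FL rounds
        for cid in selector.select(k=3):
            selector.update(cid, random.random())     # placeholder reward
    print("current favourites:", selector.select(k=3))
```

In a real FL loop the selector would be driven by the server's round scheduler and fed an actual utility signal rather than random rewards.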
This list is automatically generated from the titles and abstracts of the papers in this site.