FedAdapt: Adaptive Offloading for IoT Devices in Federated Learning
- URL: http://arxiv.org/abs/2107.04271v1
- Date: Fri, 9 Jul 2021 07:29:55 GMT
- Title: FedAdapt: Adaptive Offloading for IoT Devices in Federated Learning
- Authors: Di Wu and Rehmat Ullah and Paul Harvey and Peter Kilpatrick and Ivor
Spence and Blesson Varghese
- Abstract summary: Applying Federated Learning (FL) on Internet-of-Things devices is necessitated by the large volumes of data they produce and growing concerns of data privacy.
This paper presents FedAdapt, an adaptive offloading FL framework to mitigate the aforementioned challenges.
- Score: 2.5775113252104216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Applying Federated Learning (FL) on Internet-of-Things devices is
necessitated by the large volumes of data they produce and growing concerns of
data privacy. However, there are three challenges that need to be addressed to
make FL efficient: (i) execute on devices with limited computational
capabilities, (ii) account for stragglers due to computational heterogeneity of
devices, and (iii) adapt to the changing network bandwidths. This paper
presents FedAdapt, an adaptive offloading FL framework to mitigate the
aforementioned challenges. FedAdapt accelerates local training in
computationally constrained devices by leveraging layer offloading of deep
neural networks (DNNs) to servers. Further, FedAdapt adopts reinforcement
learning-based optimization and clustering to adaptively identify which layers
of the DNN should be offloaded for each individual device on to a server to
tackle the challenges of computational heterogeneity and changing network
bandwidth. Experimental studies are carried out on a lab-based testbed
comprising five IoT devices. By offloading a DNN from the device to the server,
FedAdapt reduces the training time of a typical IoT device by over half
compared to classic FL. The training time of extreme stragglers and the overall
training time can be reduced by up to 57%. Furthermore, with changing network
bandwidth, FedAdapt is demonstrated to reduce the training time by up to 40%
when compared to classic FL, without sacrificing accuracy. FedAdapt can be
downloaded from https://github.com/qub-blesson/FedAdapt.
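
The layer-offloading idea described above can be illustrated with a minimal split-training sketch. The snippet below is not the FedAdapt implementation (see the GitHub repository above for that); it is an assumed PyTorch example in which a device executes the first few layers of a toy CNN, the intermediate activations are handed to a server-side module that runs the remaining layers, and the activation gradients are passed back so the device-side layers can also be updated. The model, the split point, and all shapes are invented for illustration.

```python
# Minimal split-training sketch of DNN layer offloading (illustrative only;
# not the FedAdapt code base). Assumes PyTorch; model and shapes are invented.
import torch
import torch.nn as nn

# A toy CNN written as a flat Sequential so it can be cut at any layer index.
layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
)

# Hypothetical offloading decision: the device keeps the first 3 layers and the
# server runs the rest (FedAdapt chooses this point per device adaptively).
split_point = 3
device_model = nn.Sequential(*list(layers.children())[:split_point])
server_model = nn.Sequential(*list(layers.children())[split_point:])

opt_device = torch.optim.SGD(device_model.parameters(), lr=0.01)
opt_server = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """One offloaded step: device forward, server forward/backward, device backward."""
    opt_device.zero_grad()
    opt_server.zero_grad()

    # Device computes the first layers; `smashed` is what would be sent over the network.
    smashed = device_model(x)

    # Server resumes the forward pass from the received activations.
    server_in = smashed.detach().requires_grad_(True)
    loss = loss_fn(server_model(server_in), y)

    # Server backward pass; the gradient w.r.t. the activations is returned to the
    # device, which uses it to finish backpropagation through its local layers.
    loss.backward()
    smashed.backward(server_in.grad)

    opt_device.step()
    opt_server.step()
    return loss.item()

# Example: one step on a random CIFAR-10-sized batch.
print(train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))))
```

In FedAdapt the split point is not fixed as it is here: a reinforcement learning agent, aided by clustering of devices, selects how many layers each device keeps based on its observed training speed and the current network bandwidth.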
Related papers
- Ampere: Communication-Efficient and High-Accuracy Split Federated Learning [19.564340315424413]
A Federated Learning (FL) system collaboratively trains neural networks across devices and a server but is limited by significant on-device computation costs.
We propose Ampere, a novel collaborative training system that simultaneously minimizes on-device computation and device-server communication.
A lightweight auxiliary network generation method decouples training between the device and server, reducing frequent intermediate exchanges to a single transfer.
arXiv Detail & Related papers (2025-07-08T20:54:43Z)
- Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772]
Federated Learning (FL) is attracting increasing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally relies on a parameter server and a large number of edge devices throughout model training.
We propose TEASQ-Fed to exploit edge devices to asynchronously participate in the training process by actively applying for tasks.
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Adaptive Target-Condition Neural Network: DNN-Aided Load Balancing for Hybrid LiFi and WiFi Networks [19.483289519348315]
Machine learning has the potential to provide a complexity-friendly load balancing solution.
State-of-the-art (SOTA) learning-aided load balancing (LB) methods need retraining when the network environment changes.
A novel deep neural network (DNN) structure named adaptive target-condition neural network (A-TCNN) is proposed.
arXiv Detail & Related papers (2022-08-09T20:46:13Z)
- FedAdapter: Efficient Federated Learning for Modern NLP [2.6706511009396023]
Fine-tuning pre-trained models for downstream tasks often requires private data.
FedNLP is prohibitively slow due to the large model sizes and the resultant high network/computation cost.
We propose FedAdapter, a framework that enhances the existing FedNLP with two novel designs.
arXiv Detail & Related papers (2022-05-20T13:10:43Z)
- CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization [3.219812767529503]
We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices.
CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system.
arXiv Detail & Related papers (2022-03-10T16:45:05Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- On the Tradeoff between Energy, Precision, and Accuracy in Federated Quantized Neural Networks [68.52621234990728]
Federated learning (FL) over wireless networks requires balancing between accuracy, energy efficiency, and precision.
We propose a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission; a generic sketch of such uplink quantization appears after this list.
Our framework can reduce energy consumption by up to 53% compared to a standard FL model.
arXiv Detail & Related papers (2021-11-15T17:00:03Z)
- FedFly: Towards Migration in Edge-based Distributed Federated Learning [2.5775113252104216]
Federated learning (FL) is a privacy-preserving distributed machine learning technique.
FedFly is the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training.
arXiv Detail & Related papers (2021-11-02T11:44:41Z)
- To Talk or to Work: Energy Efficient Federated Learning over Mobile Devices via the Weight Quantization and 5G Transmission Co-Design [49.95746344960136]
Federated learning (FL) is a new paradigm for large-scale learning tasks across mobile devices.
It is not clear how to establish an effective wireless network architecture to support FL over mobile devices.
We develop a wireless transmission and weight quantization co-design for energy efficient FL over heterogeneous 5G mobile devices.
arXiv Detail & Related papers (2020-12-21T01:13:44Z)
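
Several of the entries above (the federated quantized neural network and weight-quantization co-design papers in particular) reduce the precision of model updates before uplink transmission. The sketch below is a generic, assumed illustration of uniform weight quantization and dequantization in PyTorch; it is not the specific scheme of any listed paper, and all names and parameters are invented.

```python
# Generic uniform quantization of a model update before uplink (illustrative only).
import torch

def quantize(tensor: torch.Tensor, num_bits: int = 8):
    """Map a float tensor to num_bits integer levels plus a scale and zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    t_min, t_max = tensor.min(), tensor.max()
    scale = (t_max - t_min) / (qmax - qmin) + 1e-12  # avoid division by zero
    zero_point = qmin - t_min / scale
    q = torch.clamp(torch.round(tensor / scale + zero_point), qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q: torch.Tensor, scale: torch.Tensor, zero_point: torch.Tensor):
    """Recover an approximate float tensor from its quantized representation."""
    return (q.float() - zero_point) * scale

# Example: quantize a fake weight update to 8 bits and check the reconstruction error.
update = torch.randn(1000)
q, scale, zero_point = quantize(update, num_bits=8)
print((dequantize(q, scale, zero_point) - update).abs().max())
```

Sending the uint8 tensor plus two scalars instead of 32-bit floats cuts the uplink payload by roughly 4x at the cost of a bounded rounding error; the listed papers study how this trade-off interacts with accuracy and energy.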
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.