FedFly: Towards Migration in Edge-based Distributed Federated Learning
- URL: http://arxiv.org/abs/2111.01516v1
- Date: Tue, 2 Nov 2021 11:44:41 GMT
- Title: FedFly: Towards Migration in Edge-based Distributed Federated Learning
- Authors: Rehmat Ullah, Di Wu, Paul Harvey, Peter Kilpatrick, Ivor Spence,
Blesson Varghese
- Abstract summary: Federated learning (FL) is a privacy-preserving distributed machine learning technique.
FedFly is the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training.
- Score: 2.5775113252104216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a privacy-preserving distributed machine learning
technique that trains models without having direct access to the original data
generated on devices. Since devices may be resource constrained, offloading can
be used to improve FL performance by transferring computational workload from
devices to edge servers. However, due to mobility, devices participating in FL
may leave the network during training and need to connect to a different edge
server. This is challenging because the computations offloaded to the edge
server need to be migrated along with the device. To address this, we present FedFly, which is,
to the best of our knowledge, the first work to migrate a deep neural network
(DNN) when devices move between edge servers during FL training. Our empirical
results on the CIFAR-10 dataset, with both balanced and imbalanced data
distributions, support our claim that FedFly reduces training time by up to
33% when a device moves after 50% of the training is completed, and by up to
45% when it moves after 90% of the training is completed, compared to the
state-of-the-art offloading approach in FL. FedFly incurs a negligible
overhead of 2 seconds and does
not compromise accuracy. Finally, we highlight a number of open research issues
for further investigation. FedFly can be downloaded from
https://github.com/qub-blesson/FedFly
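To make the offloading-and-migration idea concrete, below is a minimal, single-process sketch of split training in which the device runs the first layers of the DNN, an edge server runs the rest, and the offloaded partition is handed to a new server mid-training. This is a sketch, not FedFly's actual implementation: the partition point, the module and function names, and the migrate helper are all illustrative assumptions.

```python
# Minimal sketch of split training with edge-server migration, in the
# spirit of FedFly. All names here are illustrative, not FedFly's API.
import copy
import torch
import torch.nn as nn

class DevicePart(nn.Module):
    """First layers of the DNN, kept on the mobile device."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.layers(x)

class EdgePart(nn.Module):
    """Remaining layers, offloaded to an edge server."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))

    def forward(self, x):
        return self.layers(x)

def train_step(device_part, edge_part, dev_opt, edge_opt, x, y):
    """One split step: device forward, edge forward/backward, then the
    gradient of the activations is returned to the device."""
    acts = device_part(x)
    sent = acts.detach().requires_grad_(True)       # what crosses the network
    loss = nn.functional.cross_entropy(edge_part(sent), y)
    dev_opt.zero_grad()
    edge_opt.zero_grad()
    loss.backward()                                  # edge-side backward
    acts.backward(sent.grad)                         # device-side backward
    dev_opt.step()
    edge_opt.step()
    return loss.item()

def migrate(edge_part, edge_opt):
    """Hand the offloaded partition to a new edge server: both the weights
    and the optimizer state travel, so training resumes where it stopped."""
    new_part = copy.deepcopy(edge_part)
    new_opt = torch.optim.SGD(new_part.parameters(), lr=0.01, momentum=0.9)
    new_opt.load_state_dict(edge_opt.state_dict())   # momentum buffers survive
    return new_part, new_opt

# Usage: train via edge server A, migrate mid-training, continue on server B.
dev, edge = DevicePart(), EdgePart()
dev_opt = torch.optim.SGD(dev.parameters(), lr=0.01, momentum=0.9)
edge_opt = torch.optim.SGD(edge.parameters(), lr=0.01, momentum=0.9)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
for step in range(4):
    if step == 2:                                    # the device moves
        edge, edge_opt = migrate(edge, edge_opt)
    print(train_step(dev, edge, dev_opt, edge_opt, x, y))
```

The point the sketch captures is that migration must move not just the offloaded weights but also the associated optimizer state, so the destination server can continue training exactly where the source server left off.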
Related papers
- Federated Learning with Workload Reduction through Partial Training of Client Models and Entropy-Based Data Selection [3.9981390090442694]
We propose FedFT-EDS, a novel approach that combines Fine-Tuning of partial client models with Entropy-based Data Selection to reduce training workloads on edge devices.
Our experiments show that FedFT-EDS uses only 50% of user data while improving global model performance compared to the baselines, FedAvg and FedProx.
FedFT-EDS improves client learning efficiency by up to 3 times, using one third of the training time on clients to achieve performance equivalent to the baselines (a sketch of entropy-based selection appears after this list).
arXiv Detail & Related papers (2024-12-30T22:47:32Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772]
Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL typically relies on a parameter server and a large number of edge devices throughout model training.
We propose TEASQ-Fed, which lets edge devices participate in training asynchronously by actively applying for tasks.
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
- FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z)
- Enhancing Efficiency in Multidevice Federated Learning through Data Selection [11.67484476827617]
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
In this paper, we develop an FL framework that incorporates on-device data selection on resource-constrained devices.
We show that our framework achieves 19% higher accuracy and 58% lower latency compared to baseline FL without our strategies.
arXiv Detail & Related papers (2022-11-08T11:39:17Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
However, the impact of on-device storage on FL performance remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST dynamically extracts and trains sparse sub-networks from the full target network (see the sparse-mask sketch after this list).
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- FedAdapt: Adaptive Offloading for IoT Devices in Federated Learning [2.5775113252104216]
Federated Learning (FL) on Internet-of-Things devices is necessitated by the large volumes of data they produce and growing concerns about data privacy.
This paper presents FedAdapt, an adaptive offloading FL framework to mitigate the aforementioned challenges.
arXiv Detail & Related papers (2021-07-09T07:29:55Z)
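For the entropy-based data selection referenced in the FedFT-EDS entry above, here is a hypothetical sketch: a client scores its local samples by the entropy of the current model's predictions and trains only on the most uncertain ones. The function name, toy model, and 50% budget are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of entropy-based data selection: keep the samples whose
# current predictions are most uncertain. Names and budget are illustrative.
import torch

def select_by_entropy(model, inputs, keep_fraction=0.5):
    """Return the subset of `inputs` with the highest prediction entropy."""
    with torch.no_grad():
        probs = torch.softmax(model(inputs), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    k = max(1, int(keep_fraction * len(inputs)))
    idx = entropy.topk(k).indices                 # most informative samples
    return inputs[idx], idx

# Usage: a client trains on only the selected half of its local data.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
data = torch.randn(64, 1, 28, 28)
subset, idx = select_by_entropy(model, data)      # 32 highest-entropy samples
```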
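And for the FedDST entry, a rough sketch of the sparse sub-network idea: a binary mask selects which weights a client trains, so both computation and communication scale with the mask's density. The magnitude-based mask and 20% density below are illustrative assumptions; FedDST's actual mask evolution during training is more elaborate.

```python
# Hedged sketch of training a sparse sub-network under a binary mask.
import torch

def magnitude_mask(weight, density=0.2):
    """Keep the `density` fraction of largest-magnitude weights."""
    k = max(1, int(density * weight.numel()))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

def masked_sgd_step(weight, grad, mask, lr=0.1):
    """Update only the active weights; pruned positions stay exactly zero."""
    weight -= lr * grad * mask
    weight *= mask
    return weight

# Usage: extract a sparse sub-network from a dense layer, then train it.
w = torch.randn(10, 10)
mask = magnitude_mask(w)                  # ~20 of 100 weights stay active
g = torch.randn(10, 10)                   # stand-in for a real gradient
w = masked_sgd_step(w, g, mask)
print(int(mask.sum()), "active weights")  # only these would be communicated
```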