FLeet: Online Federated Learning via Staleness Awareness and Performance
Prediction
- URL: http://arxiv.org/abs/2006.07273v2
- Date: Thu, 3 Dec 2020 11:19:50 GMT
- Title: FLeet: Online Federated Learning via Staleness Awareness and Performance
Prediction
- Authors: Georgios Damaskinos, Rachid Guerraoui, Anne-Marie Kermarrec, Vlad
Nitu, Rhicheek Patra, Francois Taiani
- Abstract summary: This paper presents FLeet, the first Online Federated Learning system.
Online FL combines the privacy of Standard FL with the precision of online learning.
I-Prof is a new lightweight profiler that predicts and controls the impact of learning tasks on mobile devices.
AdaSGD is a new adaptive learning algorithm that is resilient to delayed updates.
- Score: 9.408271687085476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is very appealing for its privacy benefits:
essentially, a global model is trained with updates computed on mobile devices
while keeping the data of users local. Standard FL infrastructures are however
designed to have no energy or performance impact on mobile devices, and are
therefore not suitable for applications that require frequent (online) model
updates, such as news recommenders.
This paper presents FLeet, the first Online FL system, acting as a middleware
between the Android OS and the machine learning application. FLeet combines the
privacy of Standard FL with the precision of online learning thanks to two core
components: (i) I-Prof, a new lightweight profiler that predicts and controls
the impact of learning tasks on mobile devices, and (ii) AdaSGD, a new adaptive
learning algorithm that is resilient to delayed updates.
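How I-Prof makes its predictions is not specified in this abstract. As a rough illustration of the general idea of predicting and controlling a learning task's on-device cost, the sketch below fits a per-device linear cost model and then sizes the mini-batch to fit a budget; the class name, the linear form, and all numbers are illustrative assumptions, not FLeet's actual profiler.

```python
import numpy as np

class MiniProfiler:
    """Illustrative stand-in for an I-Prof-style profiler (assumed design):
    fit a per-device linear model of task cost versus mini-batch size,
    then pick the largest batch that fits a latency or energy budget."""

    def __init__(self):
        self.coef = None  # [fixed overhead, cost per sample]

    def fit(self, batch_sizes, costs):
        # Least-squares fit of: cost ~ a + b * batch_size
        x = np.asarray(batch_sizes, dtype=float)
        X = np.column_stack([np.ones_like(x), x])
        self.coef, *_ = np.linalg.lstsq(X, np.asarray(costs, dtype=float), rcond=None)

    def predict(self, batch_size):
        a, b = self.coef
        return a + b * batch_size

    def max_batch_under_budget(self, budget):
        # Largest mini-batch whose predicted cost stays within the budget.
        a, b = self.coef
        return max(0, int((budget - a) / b))

profiler = MiniProfiler()
profiler.fit([8, 16, 32, 64], [120.0, 190.0, 340.0, 630.0])  # made-up timings in ms
print(profiler.max_batch_under_budget(500.0))  # largest batch within 500 ms
```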
Our extensive evaluation shows that Online FL, as implemented by FLeet, can
deliver a 2.3x quality boost compared to Standard FL, while only consuming
0.036% of the battery per day. I-Prof can accurately control the impact of
learning tasks by improving the prediction accuracy up to 3.6x (computation
time) and up to 19x (energy). AdaSGD outperforms alternative FL approaches by
18.4% in terms of convergence speed on heterogeneous data.
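The abstract does not give AdaSGD's update rule. A common way to make SGD resilient to delayed updates, and one plausible reading of "staleness awareness", is to dampen each gradient according to how stale it is before applying it to the global model. The sketch below shows that generic idea with an assumed exponential dampening; it is not FLeet's published algorithm.

```python
import numpy as np

def staleness_weight(staleness, lam=0.5):
    # Assumed dampening: exponentially down-weight gradients that were
    # computed against an old model version (staleness in global steps).
    return np.exp(-lam * staleness)

def apply_delayed_update(model, grad, base_step, current_step, lr=0.1):
    """Apply a possibly stale client gradient to the global model."""
    w = staleness_weight(current_step - base_step)
    return model - lr * w * grad

model = np.zeros(4)
# A fresh update (staleness 0) counts fully; a 3-step-old one is dampened.
model = apply_delayed_update(model, np.ones(4), base_step=10, current_step=10)
model = apply_delayed_update(model, np.ones(4), base_step=7, current_step=10)
print(model)
```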
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome the challenges of training over heterogeneous, resource-constrained devices.
This framework splits the learning model into a global part, with model pruning, that is shared with all devices to learn data representations, and a personalized part that is fine-tuned for a specific device; a minimal sketch of this split follows the entry.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
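The global/personalized split described in the entry above amounts to partitioning the model's parameters, so that shared layers are aggregated across devices while each device keeps its own head. A minimal, framework-free sketch; the layer names and the FedAvg-style mean are assumptions:

```python
import numpy as np

GLOBAL_KEYS = {"conv1", "conv2"}  # shared representation layers (assumed names)
# Anything outside GLOBAL_KEYS (here, "head") stays personal to the device.

def aggregate_global(client_params):
    """FedAvg-style mean over the shared keys only."""
    return {k: np.mean([p[k] for p in client_params], axis=0) for k in GLOBAL_KEYS}

def merge(client, new_global):
    """Each device keeps its personalized layers and adopts the shared part."""
    return {**client, **new_global}

clients = [
    {"conv1": np.ones(3), "conv2": np.ones(3), "head": np.full(2, 5.0)},
    {"conv1": np.zeros(3), "conv2": np.ones(3), "head": np.full(2, -1.0)},
]
shared = aggregate_global(clients)
print([merge(c, shared)["head"] for c in clients])  # heads remain personal
```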
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent work has interpreted FL within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets; a toy sketch of this view follows the entry.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
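The MAML view mentioned in the entry above treats each client as a task: the server looks for a global model that performs well after a few local adaptation steps. A toy first-order sketch on quadratic client losses; everything here, including the loss, is an illustrative assumption rather than the paper's algorithm:

```python
import numpy as np

def loss_grad(w, target):
    # Toy per-client loss L = 0.5 * ||w - target||^2, so grad = w - target.
    return w - target

def maml_fl_round(w_global, client_targets, inner_lr=0.1, outer_lr=0.5):
    """One FL round under a first-order MAML reading: each client adapts
    locally for one step, then the server averages the gradients taken
    at the adapted models and updates the global model."""
    meta_grads = []
    for target in client_targets:
        w_adapted = w_global - inner_lr * loss_grad(w_global, target)  # inner step
        meta_grads.append(loss_grad(w_adapted, target))  # first-order outer grad
    return w_global - outer_lr * np.mean(meta_grads, axis=0)

w = np.zeros(2)
for _ in range(5):
    w = maml_fl_round(w, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
print(w)  # settles between the clients, ready to adapt quickly to either
```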
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
However, the impact of limited on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- FLAME: Federated Learning Across Multi-device Environments [9.810211000961647]
Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private.
We propose FLAME, a user-centered FL training approach to counter statistical and system heterogeneity in multi-device environments.
Our experiment results show that FLAME outperforms various baselines by 4.8-33.8% higher F-1 score, 1.02-2.86x greater energy efficiency, and up to 2.02x speedup in convergence.
arXiv Detail & Related papers (2022-02-17T22:23:56Z)
- Exploring Deep Reinforcement Learning-Assisted Federated Learning for Online Resource Allocation in EdgeIoT [53.68792408315411]
Federated learning (FL) has been increasingly considered to preserve the privacy of training data from eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy and energy balance in a continuous domain.
Numerical results demonstrate that the proposed FL-DLT3 achieves fast convergence (fewer than 100 iterations) while improving the FL accuracy-to-energy consumption ratio by 51.8% compared to existing state-of-the-art benchmarks.
arXiv Detail & Related papers (2022-02-15T13:36:15Z)
- On-the-fly Resource-Aware Model Aggregation for Federated Learning in Heterogeneous Edge [15.932747809197517]
Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
In this paper, we conduct an in-depth study of strategies to replace a central aggregation server with a flying master.
Our results, from measurements conducted in our EdgeAI testbed and over real 5G networks, demonstrate a significant runtime reduction with our flying master FL framework compared to the original FL.
arXiv Detail & Related papers (2021-12-21T19:04:42Z) - Mobility-Aware Cluster Federated Learning in Hierarchical Wireless
Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z) - FedFog: Network-Aware Optimization of Federated Learning over Wireless
Fog-Cloud Systems [40.421253127588244]
Federated learning (FL) is capable of performing large-scale distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters.
We first propose an efficient FL algorithm (called FedFog) to perform the local aggregation of gradient parameters at fog servers and the global training update at the cloud; a schematic of this two-level aggregation follows the entry.
arXiv Detail & Related papers (2021-07-04T08:03:15Z)
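The fog-then-cloud pattern in the FedFog entry above is, schematically, two nested weighted averages: each fog server aggregates its own clients, and the cloud aggregates the fog models. A sketch of that structure, with the grouping and sample counts as assumptions:

```python
import numpy as np

def fedavg(params, weights):
    # Weighted average of stacked parameter vectors (standard FedAvg step).
    return np.average(params, axis=0, weights=np.asarray(weights, dtype=float))

def fog_cloud_round(fog_groups):
    """Two-level aggregation in the spirit of FedFog (schematic):
    each fog server averages its own clients, the cloud averages fogs."""
    fog_models, fog_sizes = [], []
    for clients in fog_groups:  # each client is a (params, n_samples) pair
        params, sizes = zip(*clients)
        fog_models.append(fedavg(np.stack(params), sizes))  # local aggregation at the fog
        fog_sizes.append(sum(sizes))
    return fedavg(np.stack(fog_models), fog_sizes)  # global update at the cloud

groups = [
    [(np.ones(3), 10), (3 * np.ones(3), 30)],  # fog A: two clients
    [(np.zeros(3), 20)],                       # fog B: one client
]
print(fog_cloud_round(groups))  # equals the weighted mean over all 60 samples
```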
- FLaPS: Federated Learning and Privately Scaling [3.618133010429131]
Federated learning (FL) is a distributed learning process in which the model is transferred to the devices that possess the data.
We present the Federated Learning and Privately Scaling (FLaPS) architecture, which improves the scalability as well as the security and privacy of the system.
arXiv Detail & Related papers (2020-09-13T14:20:17Z)