FLEdge: Benchmarking Federated Machine Learning Applications in Edge
Computing Systems
- URL: http://arxiv.org/abs/2306.05172v2
- Date: Tue, 13 Jun 2023 13:41:43 GMT
- Title: FLEdge: Benchmarking Federated Machine Learning Applications in Edge
Computing Systems
- Authors: Herbert Woisetschläger, Alexander Isenko, Ruben Mayer, Hans-Arno
Jacobsen
- Abstract summary: We introduce FLEdge, a benchmark targeting FL workloads in edge computing systems.
We systematically study hardware heterogeneity, energy efficiency during training, and the effect of various differential privacy levels on training in FL systems.
We evaluate the impact of client dropouts on state-of-the-art FL strategies with failure rates as high as 50%.
- Score: 77.45213180689952
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Machine Learning (FL) has received considerable attention in recent
years. FL benchmarks are predominantly explored in either simulated systems or
data center environments, neglecting the setups of real-world systems, which
are often closely linked to edge computing. We close this research gap by
introducing FLEdge, a benchmark targeting FL workloads in edge computing
systems. We systematically study hardware heterogeneity, energy efficiency
during training, and the effect of various differential privacy levels on
training in FL systems. To make this benchmark applicable to real-world
scenarios, we evaluate the impact of client dropouts on state-of-the-art FL
strategies with failure rates as high as 50%. FLEdge provides new insights,
such as that training state-of-the-art FL workloads on older GPU-accelerated
embedded devices is up to 3x more energy efficient than on modern server-grade
GPUs.
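Two of the dimensions FLEdge varies, client dropouts and differential privacy noise, can be illustrated in a minimal FedAvg loop. The sketch below is a hypothetical toy model (a scalar parameter, a quadratic per-sample loss, and illustrative names and rates), not the benchmark's actual harness:

```python
import random

def local_update(w, data, lr=0.1, steps=5):
    """Toy local training: gradient steps on f(w) = mean (w - x)^2 over the data."""
    for _ in range(steps):
        g = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * g
    return w

def fedavg_round(w_global, clients, dropout_rate=0.5, clip=1.0, dp_sigma=0.0):
    """One FedAvg round with simulated client failures and optional DP noise."""
    survivors = [c for c in clients if random.random() >= dropout_rate]
    if not survivors:                 # every client dropped out this round
        return w_global
    updates = []
    for data in survivors:
        delta = local_update(w_global, data) - w_global
        delta = max(-clip, min(clip, delta))         # clip the update (1-D here)
        delta += random.gauss(0.0, dp_sigma * clip)  # Gaussian mechanism
        updates.append(delta)
    return w_global + sum(updates) / len(updates)

random.seed(0)
# Four clients with heterogeneous local data distributions.
clients = [[random.gauss(mu, 1.0) for _ in range(20)] for mu in (1.0, 2.0, 3.0, 4.0)]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients, dropout_rate=0.5, dp_sigma=0.01)
print(w)
```

With half the clients failing each round, the global model still drifts toward the average of the client optima, but it fluctuates round to round depending on which subset survives, which is exactly the robustness effect the benchmark measures.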
Related papers
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, caused mainly by the characteristics of heterogeneous devices and their scale.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
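The MAML reading of FL can be pictured as a two-level update: each client takes an inner adaptation step on its own data, and the server descends on the gradients evaluated at the adapted models. The first-order toy sketch below (a scalar model and a made-up quadratic objective) is a generic illustration of that structure, not this paper's exact algorithm:

```python
def grad(w, target):
    """Gradient of the toy per-client loss (w - target)^2."""
    return 2 * (w - target)

def maml_fl_round(w, client_targets, inner_lr=0.05, outer_lr=0.1):
    """One MAML-style federated round (first-order approximation)."""
    outer_grads = []
    for t in client_targets:
        w_adapted = w - inner_lr * grad(w, t)   # inner (personalization) step
        outer_grads.append(grad(w_adapted, t))  # gradient at the adapted weights
    return w - outer_lr * sum(outer_grads) / len(outer_grads)

w = 0.0
for _ in range(100):
    w = maml_fl_round(w, client_targets=[1.0, 3.0])
print(round(w, 3))  # 2.0, the point from which both clients adapt best
```

The meta-model converges to a point that is one cheap adaptation step away from each client's optimum, which is the fast-adaptation benefit the paper quantifies.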
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
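A classical baseline for keeping a representative subset of a data stream under a fixed storage budget is reservoir sampling. The paper's online selection policy is more sophisticated, so treat this as a hypothetical reference point only:

```python
import random

def reservoir_update(buffer, capacity, sample, seen):
    """Keep a uniform random subset of the stream in O(capacity) memory.
    `seen` is the number of stream samples observed so far, including this one."""
    if len(buffer) < capacity:
        buffer.append(sample)
    else:
        j = random.randrange(seen)  # sample kept with probability capacity/seen
        if j < capacity:
            buffer[j] = sample
    return buffer

random.seed(1)
buf, cap = [], 8
for i, x in enumerate(range(1000), start=1):
    reservoir_update(buf, cap, x, i)
print(len(buf))  # 8
```

Every sample in the stream ends up in the buffer with equal probability, so local training on the buffer approximates training on the full (unstorable) stream.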
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Exploring Deep Reinforcement Learning-Assisted Federated Learning for Online Resource Allocation in EdgeIoT [53.68792408315411]
Federated learning (FL) has been increasingly considered to preserve the privacy of training data against eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy-energy balance in a continuous domain.
Numerical results demonstrate that the proposed FL-DLT3 achieves fast convergence (fewer than 100 iterations) while improving the FL accuracy-to-energy-consumption ratio by 51.8% compared to existing state-of-the-art benchmarks.
arXiv Detail & Related papers (2022-02-15T13:36:15Z)
- On-the-fly Resource-Aware Model Aggregation for Federated Learning in Heterogeneous Edge [15.932747809197517]
Edge computing has revolutionized mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
In this paper, we conduct an in-depth study of strategies to replace a central aggregation server with a flying master.
Our results, based on measurements conducted in our EdgeAI testbed and over real 5G networks, demonstrate a significant runtime reduction with our flying-master FL framework compared to the original FL.
arXiv Detail & Related papers (2021-12-21T19:04:42Z)
- On-device Federated Learning with Flower [22.719117235237036]
Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device.
Despite the algorithmic advancements in FL, the support for on-device training of FL algorithms on edge devices remains poor.
We present an exploration of on-device FL on various smartphones and embedded devices using the Flower framework.
arXiv Detail & Related papers (2021-04-07T10:42:14Z)
- Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things [34.544836653715244]
Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques.
Recently, FL and SL have been combined into splitfed learning (SFL) to leverage the benefits of each.
This work considers FL, SL, and SFL, and mounts them on Raspberry Pi devices to evaluate their performance.
arXiv Detail & Related papers (2021-03-03T23:55:37Z)
- Flower: A Friendly Federated Learning Research Framework [18.54638343801354]
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model.
We present Flower -- a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments.
arXiv Detail & Related papers (2020-07-28T17:59:07Z)
- FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction [9.408271687085476]
This paper presents FLeet, the first Online Federated Learning system.
Online FL combines the privacy of Standard FL with the precision of online learning.
I-Prof is a new lightweight profiler that predicts and controls the impact of learning tasks on mobile devices.
AdaSGD is a new adaptive learning algorithm that is resilient to delayed updates.
arXiv Detail & Related papers (2020-06-12T15:43:38Z)
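A common way to make SGD resilient to delayed updates is to dampen each incoming gradient by how stale it is, i.e. how many model versions behind the contributor was when it computed the update. The weighting function below is illustrative, not FLeet's actual AdaSGD rule:

```python
def staleness_weight(staleness, alpha=0.5):
    """Dampen updates computed against an old model version."""
    return 1.0 / (1.0 + staleness) ** alpha

def async_apply(w, grad, current_version, computed_at_version, lr=0.1):
    """Apply one asynchronous gradient update, scaled by its staleness."""
    s = current_version - computed_at_version  # how many versions behind
    return w - lr * staleness_weight(s) * grad

w = 1.0
w = async_apply(w, grad=2.0, current_version=10, computed_at_version=10)  # fresh
w_fresh = w
w = async_apply(w, grad=2.0, current_version=12, computed_at_version=9)   # 3 stale
print(round(w_fresh, 3), round(w, 3))  # 0.8 0.7
```

The fresh update is applied at full strength, while the three-versions-old update contributes only half as much, limiting how far outdated gradients can drag the model.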
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.