FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems
- URL: http://arxiv.org/abs/2306.05172v2
- Date: Tue, 13 Jun 2023 13:41:43 GMT
- Title: FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems
- Authors: Herbert Woisetschläger, Alexander Isenko, Ruben Mayer, Hans-Arno Jacobsen
- Abstract summary: We introduce FLEdge, a benchmark targeting FL workloads in edge computing systems.
We systematically study hardware heterogeneity, energy efficiency during training, and the effect of various differential privacy levels on training in FL systems.
We evaluate the impact of client dropouts on state-of-the-art FL strategies with failure rates as high as 50%.
- Score: 77.45213180689952
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Machine Learning (FL) has received considerable attention in recent
years. FL benchmarks are predominantly explored in either simulated systems or
data center environments, neglecting the setups of real-world systems, which
are often closely linked to edge computing. We close this research gap by
introducing FLEdge, a benchmark targeting FL workloads in edge computing
systems. We systematically study hardware heterogeneity, energy efficiency
during training, and the effect of various differential privacy levels on
training in FL systems. To make this benchmark applicable to real-world
scenarios, we evaluate the impact of client dropouts on state-of-the-art FL
strategies with failure rates as high as 50%. FLEdge provides new insights,
such as that training state-of-the-art FL workloads on older GPU-accelerated
embedded devices is up to 3x more energy efficient than on modern server-grade
GPUs.
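To make the dropout and energy-efficiency experiments concrete, the sketch below shows (i) a FedAvg-style aggregation round in which every client independently drops out with a configurable failure rate and (ii) energy efficiency reported as samples processed per joule. This is a minimal illustration under our own naming (fedavg_round_with_dropouts, samples_per_joule), not the FLEdge benchmark code.

```python
import random
from typing import Dict, List

import numpy as np


def fedavg_round_with_dropouts(client_weights: List[Dict[str, np.ndarray]],
                               client_sizes: List[int],
                               failure_rate: float = 0.5,
                               seed: int = 0) -> Dict[str, np.ndarray]:
    """One FedAvg aggregation round where each client independently
    drops out with probability `failure_rate` (0.5 models a 50% failure rate)."""
    rng = random.Random(seed)
    survivors = [i for i in range(len(client_weights)) if rng.random() >= failure_rate]
    if not survivors:
        return {}  # every client failed this round; the server keeps the previous model
    total_samples = sum(client_sizes[i] for i in survivors)
    aggregated: Dict[str, np.ndarray] = {}
    for name in client_weights[survivors[0]]:
        # weighted average over the surviving clients' parameters only
        aggregated[name] = sum(
            client_weights[i][name] * (client_sizes[i] / total_samples)
            for i in survivors
        )
    return aggregated


def samples_per_joule(num_samples: int, mean_power_watts: float, wall_time_s: float) -> float:
    """Energy efficiency as training samples processed per joule of energy consumed."""
    return num_samples / (mean_power_watts * wall_time_s)
```

With, say, 10 clients and failure_rate=0.5, roughly half of the local updates enter each round's weighted average, which is the failure regime the abstract reports on.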
Related papers
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to that of a state-of-the-art data center GPU, and study network utilization under realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- FedLE: Federated Learning Client Selection with Lifespan Extension for Edge IoT Networks [34.63384007690422]
Federated learning (FL) is a distributed and privacy-preserving learning framework for predictive modeling with massive data generated at the edge by Internet of Things (IoT) devices.
One major challenge preventing the wide adoption of FL in IoT is the pervasive power supply constraints of IoT devices.
We propose FedLE, an energy-efficient client selection framework that extends the lifespan of edge IoT networks.
arXiv Detail & Related papers (2023-02-14T19:41:24Z)
- Enhancing Efficiency in Multidevice Federated Learning through Data Selection [11.67484476827617]
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
In this paper, we develop an FL framework that incorporates on-device data selection on resource-constrained devices.
We show that our framework achieves 19% higher accuracy and 58% lower latency compared to a baseline FL setup without the implemented selection strategies.
arXiv Detail & Related papers (2022-11-08T11:39:17Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on FL performance remains largely unexplored.
In this work, we take a first step toward online data selection for FL with limited on-device storage (a generic bounded-storage sketch appears after this list).
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze its training behavior.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- perf4sight: A toolflow to model CNN training performance on Edge GPUs [16.61258138725983]
This work proposes perf4sight, an automated methodology for developing accurate models that predict CNN training memory footprint and latency.
With PyTorch as the framework and the NVIDIA Jetson TX2 as the target device, the developed models predict training memory footprint and latency with 95% and 91% accuracy, respectively (a generic sketch of such a performance model appears after this list).
arXiv Detail & Related papers (2021-08-12T07:55:37Z)
- FedAR: Activity and Resource-Aware Federated Learning Model for Distributed Mobile Robots [1.332560004325655]
A recently proposed machine learning approach called Federated Learning (FL) paves the way toward preserving data privacy.
This paper proposes an FL model by monitoring client activities and leveraging available local computing resources.
We consider mobile robots as FL clients to understand their resource-constrained behavior in a real-world setting.
arXiv Detail & Related papers (2021-01-11T05:27:37Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected to a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how data-driven supervised deep learning and deep reinforcement learning can be applied in URLLC.
To address the open problems that remain, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
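For the "Online Data Selection for Federated Learning with Limited Storage" entry above, one generic way to picture a bounded on-device buffer is reservoir sampling over the incoming data stream. The class below is a sketch under that assumption; it does not reproduce the paper's actual selection criterion, and the name BoundedSampleBuffer is ours.

```python
import random
from typing import Any, List


class BoundedSampleBuffer:
    """Keep a uniform random subset of a data stream in a fixed-size buffer
    (classic reservoir sampling); a stand-in for on-device storage limits."""

    def __init__(self, capacity: int, seed: int = 0) -> None:
        self.capacity = capacity
        self.buffer: List[Any] = []
        self._seen = 0
        self._rng = random.Random(seed)

    def offer(self, sample: Any) -> None:
        """Consider one arriving sample for storage."""
        self._seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # replace a stored item with probability capacity / samples_seen
            j = self._rng.randrange(self._seen)
            if j < self.capacity:
                self.buffer[j] = sample
```

A client would call offer() on every arriving sample and run local training each round on buffer, so storage never exceeds capacity while the retained subset stays uniformly representative of the stream.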
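For the perf4sight entry, the general idea of a learned performance model can be pictured as a regression from profiled training configurations to measured latency. The feature set, linear model, and numbers below are illustrative assumptions, not the perf4sight toolflow.

```python
import numpy as np

# Profiled training configurations: (batch size, input resolution, channels)
# and the measured per-iteration latency in milliseconds (made-up example values).
configs = np.array([
    [8, 224, 3],
    [16, 224, 3],
    [32, 224, 3],
    [16, 320, 3],
    [32, 320, 3],
], dtype=float)
latency_ms = np.array([41.0, 69.0, 128.0, 122.0, 231.0])

# Fit a simple linear model latency ~ X @ w (with a bias column) via least squares.
X = np.hstack([configs, np.ones((len(configs), 1))])
w, *_ = np.linalg.lstsq(X, latency_ms, rcond=None)


def predict_latency_ms(batch_size: float, resolution: float, channels: float) -> float:
    """Predict per-iteration training latency for an unseen configuration."""
    return float(np.array([batch_size, resolution, channels, 1.0]) @ w)


print(predict_latency_ms(24, 224, 3))
```

The same fit-then-predict workflow extends to memory-footprint prediction by swapping the target variable.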