Benchmarking of CPU-intensive Stream Data Processing in The Edge Computing Systems
- URL: http://arxiv.org/abs/2505.07755v1
- Date: Mon, 12 May 2025 17:02:02 GMT
- Title: Benchmarking of CPU-intensive Stream Data Processing in The Edge Computing Systems
- Authors: Tomasz Szydlo, Viacheslaw Horbanow, Dev Nandan Jha, Shashikant Ilager, Aleksander Slominski, Rajiv Ranjan
- Abstract summary: This paper evaluates the power consumption and performance characteristics of a single processing node within an edge cluster using a synthetic microbenchmark. Results show how an optimal configuration can lead to optimized usage of edge resources with respect to both performance and power consumption.
- Score: 41.19058376513831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge computing has emerged as a pivotal technology, offering significant advantages such as low latency, enhanced data security, and reduced reliance on centralized cloud infrastructure. These benefits are crucial for applications requiring real-time data processing or strict security measures. Despite these advantages, edge devices operating within edge clusters are often underutilized. This inefficiency is mainly due to the absence of a holistic performance profiling mechanism which can help dynamically adjust the desired system configuration for a given workload. Since edge computing environments involve a complex interplay between CPU frequency, power consumption, and application performance, a deeper understanding of these correlations is essential. By uncovering these relationships, it becomes possible to make informed decisions that enhance both computational efficiency and energy savings. To address this gap, this paper evaluates the power consumption and performance characteristics of a single processing node within an edge cluster using a synthetic microbenchmark, varying the workload size and CPU frequency. The results show how an optimal configuration can optimize the usage of edge resources with respect to both performance and power consumption.
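The evaluation methodology described above (timing a synthetic CPU-bound microbenchmark across workload sizes) can be illustrated with a minimal sketch. The kernel and workload sizes below are illustrative assumptions, not the paper's actual benchmark; on a Linux edge node, the CPU frequency dimension would additionally be pinned via the cpufreq governor (e.g., `/sys/devices/system/cpu/cpu*/cpufreq/`), which is omitted here for portability.

```python
import time

def cpu_workload(n: int) -> int:
    # Synthetic CPU-bound kernel: repeated integer arithmetic
    # (illustrative stand-in for the paper's microbenchmark).
    acc = 0
    for i in range(n):
        acc = (acc * 31 + i) % 1_000_003
    return acc

def benchmark(workload_sizes):
    # Time the kernel for each workload size;
    # returns a list of (size, elapsed_seconds) pairs.
    results = []
    for n in workload_sizes:
        start = time.perf_counter()
        cpu_workload(n)
        results.append((n, time.perf_counter() - start))
    return results

if __name__ == "__main__":
    for n, elapsed in benchmark([10_000, 100_000, 1_000_000]):
        print(f"n={n:>9}: {elapsed:.4f} s")
```

Sweeping such a kernel across both workload sizes and fixed CPU frequencies yields the performance/power trade-off surface the paper analyzes; power draw itself would be read from an external meter or a hardware interface such as RAPL, neither of which is modeled in this sketch.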
Related papers
- The Larger the Merrier? Efficient Large AI Model Inference in Wireless Edge Networks [56.37880529653111]
The demand for large AI model (LAIM) services is driving a paradigm shift from traditional cloud-based inference to edge-based inference for low-latency, privacy-preserving applications. In this paper, we investigate the LAIM-inference scheme, where a pre-trained LAIM is pruned and partitioned into on-device and on-server sub-models for deployment.
arXiv Detail & Related papers (2025-05-14T08:18:55Z) - EdgeMLBalancer: A Self-Adaptive Approach for Dynamic Model Switching on Resource-Constrained Edge Devices [0.0]
Machine learning on edge devices has enabled real-time AI applications in resource-constrained environments. Existing solutions for managing computational resources often focus narrowly on accuracy or energy efficiency. We propose a self-adaptive approach that optimizes CPU utilization and resource management on edge devices.
arXiv Detail & Related papers (2025-02-10T14:11:29Z) - DynaSplit: A Hardware-Software Co-Design Framework for Energy-Aware Inference on Edge [40.96858640950632]
We propose DynaSplit, a framework that dynamically configures parameters across both software and hardware.
We evaluate DynaSplit using popular pre-trained NNs on a real-world testbed.
Results show a reduction in energy consumption up to 72% compared to cloud-only computation.
arXiv Detail & Related papers (2024-10-31T12:44:07Z) - EdgeOL: Efficient in-situ Online Learning on Edge Devices [51.86178757050963]
We propose EdgeOL, an edge online learning framework that optimizes inference accuracy, fine-tuning execution time, and energy efficiency. Experimental results show that, on average, EdgeOL reduces overall fine-tuning execution time by 64%, energy consumption by 52%, and improves average inference accuracy by 1.75% over the immediate online learning strategy.
arXiv Detail & Related papers (2024-01-30T02:41:05Z) - FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z) - Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization that maximizes data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z) - SMDP-Based Dynamic Batching for Efficient Inference on GPU-Based Platforms [14.42787221783853]
This paper aims to provide a dynamic batching policy that strikes a balance between efficiency and latency.
The proposed solution has notable flexibility in balancing power consumption and latency.
arXiv Detail & Related papers (2023-01-30T13:19:16Z) - Balancing Performance and Energy Consumption of Bagging Ensembles for the Classification of Data Streams in Edge Computing [9.801387036837871]
Edge Computing (EC) has emerged as an enabling factor for developing technologies like the Internet of Things (IoT) and 5G networks.
This work investigates strategies for optimizing the performance and energy consumption of bagging ensembles to classify data streams.
arXiv Detail & Related papers (2022-01-17T04:12:18Z) - Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z) - Towards AIOps in Edge Computing Environments [60.27785717687999]
This paper describes the system design of an AIOps platform which is applicable in heterogeneous, distributed environments.
It is feasible to collect metrics with a high frequency and simultaneously run specific anomaly detection algorithms directly on edge devices.
arXiv Detail & Related papers (2021-02-12T09:33:00Z) - AutoScale: Optimizing Energy Efficiency of End-to-End Edge Inference under Stochastic Variance [11.093360539563657]
AutoScale is an adaptive and lightweight execution scaling engine built upon a custom-designed reinforcement learning algorithm.
This paper proposes AutoScale to enable accurate, energy-efficient deep learning inference at the edge.
arXiv Detail & Related papers (2020-05-06T00:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.