Improving IoT Analytics through Selective Edge Execution
- URL: http://arxiv.org/abs/2003.03588v1
- Date: Sat, 7 Mar 2020 15:02:23 GMT
- Title: Improving IoT Analytics through Selective Edge Execution
- Authors: A. Galanopoulos, A. G. Tasiopoulos, G. Iosifidis, T. Salonidis, D. J.
Leith
- Abstract summary: We propose to improve the performance of analytics by leveraging edge infrastructure.
We devise an algorithm that enables the IoT devices to execute their routines locally,
and to outsource them to cloudlet servers only if they predict a significant performance improvement.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large number of emerging IoT applications rely on machine learning routines
for analyzing data. Executing such tasks at the user devices improves response
time and economizes network resources. However, due to power and computing
limitations, the devices often cannot support such resource-intensive routines
and fail to accurately execute the analytics. In this work, we propose to
improve the performance of analytics by leveraging edge infrastructure. We
devise an algorithm that enables the IoT devices to execute their routines
locally, and to outsource them to cloudlet servers only if they predict a
significant performance improvement. The algorithm uses an approximate dual
subgradient method, making minimal assumptions about the statistical properties
of the system's parameters. Our analysis demonstrates that our proposed
algorithm can intelligently leverage the cloudlet, adapting to the service
requirements.
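The offloading rule described in the abstract can be sketched as a price-based decision: each device offloads a task only when its predicted performance gain outweighs the current resource price of the cloudlet, while the price itself is updated by a dual subgradient step on the capacity constraint. The sketch below is a minimal toy illustration of that idea; the `Task` fields, the gain/cost model, and the step size are assumptions for illustration, not the paper's actual formulation.

```python
from collections import namedtuple

# Illustrative task record: predicted accuracy when run locally, predicted
# accuracy when offloaded, and the cloudlet resource cost of offloading.
# (Field names are assumptions, not taken from the paper.)
Task = namedtuple("Task", ["local_gain", "offload_gain", "cost"])

def run_selective_offload(tasks, capacity, steps=200, step_size=0.05):
    """Toy price-based selective offloading via an approximate dual
    subgradient method on the cloudlet capacity constraint."""
    price = 0.0       # dual variable: congestion price of cloudlet resources
    offloaded = []
    for _ in range(steps):
        # A device offloads only if the predicted improvement from using
        # the cloudlet exceeds the priced resource cost.
        offloaded = [t for t in tasks
                     if (t.offload_gain - t.local_gain) > price * t.cost]
        load = sum(t.cost for t in offloaded)
        # Subgradient step: raise the price when the cloudlet is overloaded,
        # lower it (down to zero) when capacity is slack.
        price = max(0.0, price + step_size * (load - capacity))
    return price, offloaded
```

With three tasks whose predicted gains differ and a cloudlet that can serve only two of them, the price rises until the task with the smallest predicted improvement stays local, which mirrors the "outsource only if the gain is significant" behaviour.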
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these constraints by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
- Distributed intelligence on the Edge-to-Cloud Continuum: A systematic literature review [62.997667081978825]
This review aims at providing a comprehensive vision of the main state-of-the-art libraries and frameworks for machine learning and data analytics available today.
The main simulation, emulation, deployment systems, and testbeds for experimental research on the Edge-to-Cloud Continuum available today are also surveyed.
arXiv Detail & Related papers (2022-04-29T08:06:05Z)
- MAPLE-Edge: A Runtime Latency Predictor for Edge Devices [80.01591186546793]
We propose MAPLE-Edge, an edge device-oriented extension of MAPLE, the state-of-the-art latency predictor for general purpose hardware.
Compared to MAPLE, MAPLE-Edge can describe the runtime and target device platform using a much smaller set of CPU performance counters.
We also demonstrate that unlike MAPLE which performs best when trained on a pool of devices sharing a common runtime, MAPLE-Edge can effectively generalize across runtimes.
arXiv Detail & Related papers (2022-04-27T14:00:48Z)
- Multi-Component Optimization and Efficient Deployment of Neural-Networks on Resource-Constrained IoT Hardware [4.6095200019189475]
We present an end-to-end multi-component model optimization sequence and open-source its implementation.
Our optimization components can produce models that are: (i) 12.06x compressed; (ii) 0.13% to 0.27% more accurate; and (iii) orders of magnitude faster, with unit inference at 0.06 ms.
arXiv Detail & Related papers (2022-04-20T13:30:04Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks: specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
arXiv Detail & Related papers (2021-06-07T11:37:03Z)
- Measuring what Really Matters: Optimizing Neural Networks for TinyML [7.455546102930911]
Neural networks (NNs) have experienced unprecedented growth in architectural and computational complexity. Introducing NNs to resource-constrained devices enables cost-efficient deployments, widespread availability, and the preservation of sensitive data.
This work addresses the challenges of bringing Machine Learning to MCUs, where we focus on the ubiquitous ARM Cortex-M architecture.
arXiv Detail & Related papers (2021-04-21T17:14:06Z)
- Reliable Fleet Analytics for Edge IoT Solutions [0.0]
We propose a framework for facilitating machine learning at the edge for AIoT applications.
The contribution is an architecture that includes services, tools, and methods for delivering fleet analytics at scale.
We present a preliminary validation of the framework through experiments with IoT devices in rooms on a university campus.
arXiv Detail & Related papers (2021-01-12T11:28:43Z)
- Cost-effective Machine Learning Inference Offload for Edge Computing [0.3149883354098941]
This paper proposes a novel offloading mechanism by leveraging installed-base on-premises (edge) computational resources.
The proposed mechanism allows the edge devices to offload heavy and compute-intensive workloads to edge nodes instead of using remote cloud.
arXiv Detail & Related papers (2020-12-07T21:11:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.