DeepRT: A Soft Real Time Scheduler for Computer Vision Applications on
the Edge
- URL: http://arxiv.org/abs/2105.01803v1
- Date: Wed, 5 May 2021 00:08:17 GMT
- Title: DeepRT: A Soft Real Time Scheduler for Computer Vision Applications on
the Edge
- Authors: Zhe Yang, Klara Nahrstedt, Hongpeng Guo, Qian Zhou
- Abstract summary: This paper focuses on applications which make soft real time requests to perform inference on their data.
DeepRT provides a latency guarantee to the requests while maintaining high overall system throughput.
Our evaluation results show that DeepRT outperforms state-of-the-art works in terms of the number of deadline misses and throughput.
- Score: 17.725750510361884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ubiquity of smartphone cameras and IoT cameras, together with the recent
boom of deep learning and deep neural networks, has led to a proliferation of
computer-vision-driven mobile and IoT applications deployed on the edge. This paper
focuses on applications which make soft real time requests to perform inference
on their data - they desire prompt responses within designated deadlines, but
occasional deadline misses are acceptable. Supporting soft real time
applications on a multi-tenant edge server is not easy, since the requests
sharing the limited GPU computing resources of an edge server interfere with
each other. In order to tackle this problem, we comprehensively evaluate how
latency and throughput respond to different GPU execution plans. Based on this
analysis, we propose a GPU scheduler, DeepRT, which provides a latency guarantee
to the requests while maintaining high overall system throughput. The key
component of DeepRT, DisBatcher, batches data from different requests as much
as possible and is proven to provide a latency guarantee for requests
admitted by the Admission Control Module. DeepRT also includes an Adaptation
Module which tackles overruns. Our evaluation results show that DeepRT
outperforms state-of-the-art works in terms of the number of deadline misses
and throughput.
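To make the batching-under-deadlines idea concrete, the sketch below shows an EDF-ordered request queue, a utilization-based admission test, and a batcher that grows a batch only while its estimated latency still fits the earliest outstanding deadline. The class names, the linear latency model, and the utilization bound are illustrative assumptions for exposition, not DeepRT's exact design.

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    deadline: float                      # absolute deadline in seconds (EDF key)
    arrival: float = field(compare=False)
    data: object = field(compare=False, default=None)

class AdmissionControl:
    """Utilization-based admission test (an assumption, not DeepRT's exact test):
    admit a periodic request stream only if total utilization stays bounded."""
    def __init__(self, utilization_bound=0.8):
        self.utilization_bound = utilization_bound
        self.utilization = 0.0

    def try_admit(self, worst_case_latency_s, period_s):
        u = worst_case_latency_s / period_s
        if self.utilization + u <= self.utilization_bound:
            self.utilization += u
            return True
        return False

class DisBatcherSketch:
    """Toy dynamic batcher: requests are kept in EDF order, and the batch grows
    only while the estimated batch latency still fits the earliest deadline."""
    def __init__(self, latency_model):
        self.latency_model = latency_model   # batch_size -> estimated seconds
        self.queue = []                      # min-heap keyed by deadline

    def submit(self, request):
        heapq.heappush(self.queue, request)

    def next_batch(self, now):
        if not self.queue:
            return []
        slack = self.queue[0].deadline - now   # tightest remaining budget
        batch = []
        while self.queue and self.latency_model(len(batch) + 1) <= slack:
            batch.append(heapq.heappop(self.queue))
        return batch

# Usage with a linear latency model (assumed): 10 ms fixed cost + 2 ms per item.
admission = AdmissionControl(utilization_bound=0.8)
print(admission.try_admit(worst_case_latency_s=0.02, period_s=0.1))  # True (u = 0.2)

batcher = DisBatcherSketch(lambda b: 0.010 + 0.002 * b)
now = time.monotonic()
for i in range(8):
    batcher.submit(Request(deadline=now + 0.05 + 0.01 * i, arrival=now))
print(len(batcher.next_batch(now)))   # all 8 fit within the tightest 50 ms budget
```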
Related papers
- SGPRS: Seamless GPU Partitioning Real-Time Scheduler for Periodic Deep Learning Workloads [0.9898607871253774]
We propose SGPRS, the first real-time GPU scheduler to support zero-configuration partition switching.
The proposed scheduler not only meets more deadlines for parallel tasks but also sustains overall performance beyond the pivot point.
arXiv Detail & Related papers (2024-04-13T18:29:26Z)
- Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning [11.007816552466952]
This paper focuses on the problem of scheduling inference queries on Deep Neural Networks in edge networks at short timescales.
Through simulations, we analyze several policies under realistic network settings and workloads from a large ISP.
We design ASET, a Reinforcement Learning based scheduling algorithm able to adapt its decisions according to the system conditions.
arXiv Detail & Related papers (2023-01-31T13:23:34Z)
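ASET's idea of adapting dispatch decisions to current system conditions can be illustrated with a toy tabular Q-learning dispatcher. The state, action, and reward definitions below are assumptions made for exposition; the paper's actual formulation, network model, and workloads are considerably richer.

```python
import random
from collections import defaultdict

class EdgeDispatchAgent:
    """Toy epsilon-greedy Q-learning dispatcher (illustrative only).

    State:  coarse queue-length bucket of each candidate edge node (assumed).
    Action: index of the node to which the next inference query is sent.
    Reward: +1 if the query finishes within its deadline, else -1 (assumed).
    """
    def __init__(self, n_nodes, epsilon=0.1, alpha=0.2, gamma=0.9):
        self.n_nodes = n_nodes
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma
        self.q = defaultdict(lambda: [0.0] * n_nodes)

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.n_nodes)
        values = self.q[state]
        return values.index(max(values))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def bucketize(queue_lengths, width=2):
    """Discretize raw queue lengths so the Q-table stays small."""
    return tuple(q // width for q in queue_lengths)

# Tiny simulated interaction: node 0 is fast, node 1 is overloaded (fabricated).
agent = EdgeDispatchAgent(n_nodes=2)
queues = [0, 6]
for _ in range(200):
    state = bucketize(queues)
    node = agent.act(state)
    met_deadline = random.random() < (0.9 if node == 0 else 0.3)
    agent.learn(state, node, 1.0 if met_deadline else -1.0, bucketize(queues))
print("learned values per node:", agent.q[bucketize(queues)])
```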
- Deep Reinforcement Learning for Trajectory Path Planning and Distributed Inference in Resource-Constrained UAV Swarms [6.649753747542209]
This work aims to design a model for distributed collaborative inference requests and path planning in a UAV swarm.
The formulated problem is NP-hard, so finding the optimal solution is computationally demanding.
We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
arXiv Detail & Related papers (2022-12-21T17:16:42Z)
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
- MAPLE-Edge: A Runtime Latency Predictor for Edge Devices [80.01591186546793]
We propose MAPLE-Edge, an edge device-oriented extension of MAPLE, the state-of-the-art latency predictor for general purpose hardware.
Compared to MAPLE, MAPLE-Edge can describe the runtime and target device platform using a much smaller set of CPU performance counters.
We also demonstrate that, unlike MAPLE, which performs best when trained on a pool of devices sharing a common runtime, MAPLE-Edge can effectively generalize across runtimes.
arXiv Detail & Related papers (2022-04-27T14:00:48Z)
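The counter-based prediction recipe behind MAPLE-Edge can be sketched as a regression over concatenated device and architecture descriptors. The ridge-regression estimator, the specific counters, and the synthetic data below are assumptions; they only illustrate the general idea of profiling a few hardware counters and learning a latency regressor, not the paper's exact predictor.

```python
import numpy as np

def build_features(device_counters, arch_descriptor):
    """Concatenate a small vector of normalized CPU performance counters
    (e.g., instructions, cache misses; illustrative set) with a vector
    describing the DNN (e.g., layer counts, FLOPs)."""
    return np.concatenate([device_counters, arch_descriptor])

def fit_latency_model(X, y, l2=1e-3):
    """Closed-form ridge regression; the estimator choice is an assumption."""
    d = X.shape[1]
    A = X.T @ X + l2 * np.eye(d)
    return np.linalg.solve(A, X.T @ y)

def predict_latency(w, device_counters, arch_descriptor):
    return float(build_features(device_counters, arch_descriptor) @ w)

# Synthetic demo: 200 (device, architecture) pairs with fabricated latencies.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 8))                  # 4 counter + 4 architecture features
true_w = rng.uniform(0.5, 2.0, size=8)
y = X @ true_w + rng.normal(scale=0.01, size=200)
w = fit_latency_model(X, y)
print("predicted latency (arbitrary units):", predict_latency(w, X[0, :4], X[0, 4:]))
```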
- Real-Time GPU-Accelerated Machine Learning Based Multiuser Detection for 5G and Beyond [70.81551587109833]
Nonlinear beamforming filters can significantly outperform linear approaches in stationary scenarios with massive connectivity.
One of the main challenges comes from the real-time implementation of these algorithms.
This paper explores the acceleration of APSM-based algorithms through massive parallelization.
arXiv Detail & Related papers (2022-01-13T15:20:45Z)
- StrObe: Streaming Object Detection from LiDAR Packets [73.27333924964306]
Rolling-shutter LiDAR data is emitted as a stream of packets, each covering a sector of the 360° field of view.
Modern perception algorithms wait for the full sweep to be built before processing the data, which introduces additional latency.
In this paper we propose StrObe, a novel approach that minimizes latency by ingesting LiDAR packets and emitting a stream of detections without waiting for the full sweep to be built.
arXiv Detail & Related papers (2020-11-12T14:57:44Z)
- Scheduling Real-time Deep Learning Services as Imprecise Computations [11.611969843191433]
The paper presents an efficient real-time scheduling algorithm for intelligent real-time edge services.
These services perform machine intelligence tasks, such as voice recognition, LIDAR processing, or machine vision.
We show that deep neural networks can be cast as imprecise computations, each with a mandatory part and several optional parts.
arXiv Detail & Related papers (2020-11-02T16:43:04Z)
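The mandatory/optional decomposition maps naturally onto anytime inference: always run the mandatory stage, then spend any remaining budget on optional refinement stages. The per-stage budget check and the toy stages below are a simplified illustration, not the paper's schedulability analysis.

```python
import time

def imprecise_inference(x, mandatory_stage, optional_stages, deadline):
    """Run the mandatory stage, then optional stages while the deadline allows.
    optional_stages is a list of (callable, estimated_cost_s) pairs (assumed)."""
    result = mandatory_stage(x)
    for stage, estimated_cost in optional_stages:
        if time.monotonic() + estimated_cost > deadline:
            break                         # skip remaining optional parts
        result = stage(result)
    return result

# Toy stages standing in for sections of a network (assumed structure).
def backbone(x):                          # mandatory part: coarse prediction
    time.sleep(0.005)
    return {"label": "car", "confidence": 0.6}

def refine(pred):                         # optional part: improves confidence
    time.sleep(0.004)
    return {**pred, "confidence": min(1.0, pred["confidence"] + 0.15)}

deadline = time.monotonic() + 0.012       # 12 ms budget
print(imprecise_inference(None, backbone, [(refine, 0.004), (refine, 0.004)], deadline))
```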
- LC-NAS: Latency Constrained Neural Architecture Search for Point Cloud Networks [73.78551758828294]
LC-NAS is able to find state-of-the-art architectures for point cloud classification with minimal computational cost.
We show how our searched architectures achieve any desired latency with a reasonably low drop in accuracy.
arXiv Detail & Related papers (2020-08-24T10:30:21Z)
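The latency-constrained objective can be illustrated with a simple penalized selection over a toy search space. This brute-force sketch only conveys the idea of trading accuracy against a latency target; LC-NAS itself embeds the constraint in a differentiable search, which is not reproduced here, and the latency and accuracy models below are fabricated.

```python
def latency_constrained_search(candidates, predict_latency, estimate_accuracy,
                               latency_target_ms, penalty=10.0):
    """Select the architecture with the best accuracy score under a soft
    latency-overshoot penalty (illustrative objective)."""
    best, best_score = None, float("-inf")
    for arch in candidates:
        latency = predict_latency(arch)
        accuracy = estimate_accuracy(arch)
        overshoot = max(0.0, latency - latency_target_ms) / latency_target_ms
        score = accuracy - penalty * overshoot
        if score > best_score:
            best, best_score = arch, score
    return best

# Toy search space: architectures described by (depth, width).
space = [(depth, width) for depth in range(2, 8) for width in (32, 64, 128)]
predicted_latency_ms = lambda a: 3.0 * a[0] + 0.05 * a[1]        # fabricated model
predicted_accuracy = lambda a: 0.70 + 0.02 * a[0] + 0.0005 * a[1]  # fabricated proxy
print(latency_constrained_search(space, predicted_latency_ms, predicted_accuracy,
                                 latency_target_ms=20.0))
```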
- Dynamic Compression Ratio Selection for Edge Inference Systems with Hard Deadlines [9.585931043664363]
We propose a dynamic compression ratio selection scheme for edge inference systems with hard deadlines.
Information augmentation, which retransmits less-compressed data for tasks with erroneous inference results, is proposed to improve accuracy.
To account for wireless transmission errors, we further design a retransmission scheme that reduces the performance degradation caused by packet losses.
arXiv Detail & Related papers (2020-05-25T17:11:53Z)
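The two ideas in this entry, deadline-aware compression ratio selection and information augmentation via retransmission, can be sketched as follows. The timing model (transmission plus a fixed inference time), the confidence-based error proxy, and all numbers are illustrative assumptions, not the paper's formulation.

```python
def transmission_time_s(data_bytes, ratio, bandwidth_bps):
    """Time to send the compressed payload; ratio = compressed size / original size."""
    return data_bytes * ratio * 8 / bandwidth_bps

def choose_compression_ratio(data_bytes, ratios, bandwidth_bps, infer_s, deadline_s):
    """Pick the mildest compression (largest ratio) whose transmission plus
    inference time still meets the hard deadline."""
    feasible = [r for r in ratios
                if transmission_time_s(data_bytes, r, bandwidth_bps) + infer_s <= deadline_s]
    return max(feasible) if feasible else min(ratios)

def augment_if_erroneous(data_bytes, first_ratio, ratios, bandwidth_bps, infer_s,
                         deadline_s, run_inference, confidence_threshold=0.8):
    """Information-augmentation sketch: if the first result looks erroneous
    (low confidence, an assumed proxy), retransmit at milder compression
    ratios while the remaining deadline budget allows."""
    spent = transmission_time_s(data_bytes, first_ratio, bandwidth_bps) + infer_s
    label, confidence = run_inference(first_ratio)
    for milder in sorted(r for r in ratios if r > first_ratio):
        if confidence >= confidence_threshold:
            break
        extra = transmission_time_s(data_bytes, milder, bandwidth_bps) + infer_s
        if spent + extra > deadline_s:
            break                                  # no budget left for augmentation
        spent += extra
        label, confidence = run_inference(milder)
    return label, confidence, round(spent, 3)

# Toy demo: 200 kB frame, 20 Mbit/s link, 10 ms inference, and a fabricated model
# whose confidence grows with the amount of transmitted information.
fake_model = lambda ratio: ("pedestrian", 0.5 + 0.5 * ratio)
ratios = [0.1, 0.3, 0.6]
print(choose_compression_ratio(200_000, ratios, 20e6, 0.01, deadline_s=0.05))   # -> 0.3
print(augment_if_erroneous(200_000, first_ratio=0.1, ratios=ratios,
                           bandwidth_bps=20e6, infer_s=0.01, deadline_s=0.15,
                           run_inference=fake_model))
```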
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)