BCEdge: SLO-Aware DNN Inference Services with Adaptive Batching on Edge
Platforms
- URL: http://arxiv.org/abs/2305.01519v1
- Date: Mon, 1 May 2023 02:56:43 GMT
- Title: BCEdge: SLO-Aware DNN Inference Services with Adaptive Batching on Edge
Platforms
- Authors: Ziyang Zhang, Huan Li, Yang Zhao, Changyao Lin, and Jie Liu
- Abstract summary: Deep neural networks (DNNs) are being applied to a wide range of edge intelligent applications.
It is critical for edge inference platforms to achieve both high throughput and low latency.
This paper proposes BCEdge, a novel learning-based scheduling framework.
- Score: 12.095934624748686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As deep neural networks (DNNs) are being applied to a wide range of edge
intelligent applications, it is critical for edge inference platforms to achieve
both high throughput and low latency. Edge platforms that host multiple DNN
models pose new challenges for scheduler design. First, each request may have a
different service-level objective (SLO) to improve quality of service (QoS).
Second, the edge platforms should be able to efficiently
schedule multiple heterogeneous DNN models so that system utilization can be
improved. To meet these two goals, this paper proposes BCEdge, a novel
learning-based scheduling framework that combines adaptive batching with
concurrent execution of DNN inference services on edge platforms. We define a utility
function to evaluate the trade-off between throughput and latency. The
scheduler in BCEdge leverages maximum entropy-based deep reinforcement learning
(DRL) to maximize utility by automatically co-optimizing 1) the batch size and
2) the number of concurrent models. Our prototype implemented on different edge
platforms shows that the proposed BCEdge enhances utility by up to 37.6% on
average, compared to state-of-the-art solutions, while satisfying SLOs.
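To make the trade-off concrete, here is a minimal sketch of a utility of the kind the abstract describes, evaluated over the two-dimensional action space of batch size and concurrent-model count. The latency model, the penalty weight, and the grid search (standing in for the learned maximum-entropy DRL policy) are illustrative assumptions, not BCEdge's implementation.

```python
# Illustrative sketch only: a toy latency model and a utility that
# rewards throughput and penalizes SLO violations. BCEdge learns this
# trade-off with maximum entropy DRL; grid search stands in here.

def latency_ms(batch_size: int, num_models: int) -> float:
    # Assumed linear cost model: batching adds per-item work, and
    # concurrent models contend for the accelerator.
    return 5.0 + 1.5 * batch_size + 4.0 * (num_models - 1)

def utility(batch_size: int, num_models: int, slo_ms: float,
            penalty: float = 10.0) -> float:
    lat = latency_ms(batch_size, num_models)
    throughput = batch_size * num_models / (lat / 1000.0)  # requests/s
    return throughput - penalty * max(0.0, lat - slo_ms)

# The DRL scheduler explores this (batch size, #models) action space
# online; exhaustive search is a stand-in for the learned policy.
best = max(((b, m) for b in range(1, 33) for m in range(1, 5)),
           key=lambda a: utility(*a, slo_ms=50.0))
print("best (batch size, concurrent models):", best)
```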
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
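The adaptive compression named in the title can be illustrated with a generic sketch: top-k gradient sparsification whose keep-ratio adapts to the measured link bandwidth. The bandwidth rule, ratios, and function names are assumptions for illustration, not FusionLLM's actual codec or protocol.

```python
# Generic sketch of adaptive gradient compression for decentralized
# training over slow geo-distributed links. All constants are toy values.

import numpy as np

def compress_ratio(bandwidth_mbps: float) -> float:
    # Assumed rule: send a larger fraction on fast links; clamp to [0.01, 1].
    return min(1.0, max(0.01, bandwidth_mbps / 1000.0))

def topk_compress(grad: np.ndarray, ratio: float):
    # Keep only the k largest-magnitude gradient entries.
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def decompress(idx: np.ndarray, vals: np.ndarray, size: int) -> np.ndarray:
    out = np.zeros(size)
    out[idx] = vals
    return out

grad = np.random.default_rng(0).normal(size=10_000)
idx, vals = topk_compress(grad, compress_ratio(bandwidth_mbps=50.0))
print(f"sent {vals.size} of {grad.size} gradient values")
restored = decompress(idx, vals, grad.size)
```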
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Edge AI as a Service with Coordinated Deep Neural Networks [0.24578723416255746]
CoDE aims to find the optimal path, i.e., the path with the highest possible reward, by creating multi-task DNNs from individual models.
Experiments show that CoDE enhances inference throughput and achieves higher precision than a state-of-the-art method.
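The idea of selecting the path with the highest possible reward can be sketched as a longest-path computation over a DAG of candidate model components; the graph, rewards, and node names below are invented for illustration and are not CoDE's actual construction.

```python
# Sketch: pick the highest-reward path through a DAG whose nodes stand
# in for model components. Graph and rewards are invented toy values.

from functools import lru_cache

edges = {                      # node -> [(next_node, edge_reward), ...]
    "in": [("backboneA", 3.0), ("backboneB", 2.0)],
    "backboneA": [("headX", 1.0), ("headY", 2.5)],
    "backboneB": [("headX", 4.0)],
    "headX": [("out", 0.5)],
    "headY": [("out", 1.2)],
    "out": [],
}

@lru_cache(maxsize=None)
def best(node: str):
    # Return (total reward, path) of the best route from node to "out".
    if node == "out":
        return 0.0, ("out",)
    return max((r + best(nxt)[0], (node,) + best(nxt)[1])
               for nxt, r in edges[node])

reward, path = best("in")
print(reward, "via", " -> ".join(path))
```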
arXiv Detail & Related papers (2024-01-01T01:54:53Z)
- Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi-DNN Workloads [65.47816359465155]
Running multiple deep neural networks (DNNs) in parallel has become an emerging workload on both edge devices and data centers.
We propose Dysta, a novel scheduler that utilizes both static sparsity patterns and dynamic sparsity information for sparse multi-DNN scheduling.
Our proposed approach outperforms state-of-the-art methods, with up to a 10% decrease in the latency-constraint violation rate and a nearly 4X reduction in average normalized turnaround time.
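A minimal sketch of how static and dynamic sparsity might feed a multi-DNN scheduler: effective work scales with the density that survives both weight and activation sparsity, and a least-slack-first rule picks the next job. The linear latency model, the Job fields, and the policy are illustrative assumptions, not Dysta's actual predictor.

```python
# Sketch of sparsity-aware latency estimation for multi-DNN scheduling.
# The latency model and scheduling rule are illustrative simplifications.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    dense_latency_ms: float   # profiled latency at zero sparsity
    static_sparsity: float    # known offline from pruned weights
    dynamic_sparsity: float   # measured at runtime from activations
    deadline_ms: float

def estimated_latency(job: Job) -> float:
    # Effective work scales with the density surviving both weight
    # (static) and activation (dynamic) sparsity.
    density = (1.0 - job.static_sparsity) * (1.0 - job.dynamic_sparsity)
    return job.dense_latency_ms * density

def pick_next(jobs, now_ms: float) -> Job:
    # Least-slack-first: run the job closest to violating its deadline.
    return min(jobs, key=lambda j: j.deadline_ms - now_ms - estimated_latency(j))

jobs = [Job("detector", 40.0, 0.5, 0.3, deadline_ms=60.0),
        Job("classifier", 25.0, 0.7, 0.1, deadline_ms=45.0)]
print(pick_next(jobs, now_ms=0.0).name)
```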
arXiv Detail & Related papers (2023-10-17T09:25:17Z)
- A hybrid deep-learning-metaheuristic framework for bi-level network design problems [2.741266294612776]
This study proposes a hybrid deep-learning-metaheuristic framework with a bi-level architecture for road network design problems (NDPs).
We train a graph neural network (GNN) to approximate the solution of the user equilibrium (UE) traffic assignment problem.
We use inferences made by the trained model to compute the fitness-function evaluations of a genetic algorithm (GA), approximating solutions for NDPs.
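The bi-level division of labor can be sketched as a genetic algorithm whose fitness function calls a trained surrogate instead of an expensive UE solver. The surrogate below is a toy stub and the GA parameters are invented; only the overall structure mirrors the framework described above.

```python
# Sketch of the bi-level loop: a genetic algorithm (GA) searches over
# which links to build, while a trained surrogate (a GNN in the paper,
# a toy stub here) replaces each user equilibrium (UE) solve.

import random

def surrogate_ue_cost(design) -> float:
    # Stub for the trained GNN: more built links -> lower travel cost.
    return 10.0 / (1.0 + sum(design))

def fitness(design) -> float:
    build_cost = 0.8 * sum(design)          # toy construction budget
    return -(surrogate_ue_cost(design) + build_cost)

def genetic_search(num_links=8, pop_size=20, generations=50, mut=0.1):
    popn = [[random.randint(0, 1) for _ in range(num_links)]
            for _ in range(pop_size)]
    for _ in range(generations):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop_size // 2]     # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, num_links)
            child = a[:cut] + b[cut:]       # one-point crossover
            children.append([1 - g if random.random() < mut else g
                             for g in child])
        popn = parents + children
    return max(popn, key=fitness)

random.seed(0)
print("best design:", genetic_search())
```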
arXiv Detail & Related papers (2023-03-10T16:23:56Z)
- Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning [11.007816552466952]
This paper focuses on the problem of scheduling inference queries on Deep Neural Networks in edge networks at short timescales.
By means of simulations, we analyze several policies in the realistic network settings and workloads of a large ISP.
We design ASET, a Reinforcement Learning-based scheduling algorithm that adapts its decisions to the system conditions.
arXiv Detail & Related papers (2023-01-31T13:23:34Z)
- Edge-MultiAI: Multi-Tenancy of Latency-Sensitive Deep Learning Applications on Edge [10.067877168224337]
This research aims to overcome the memory contention challenge to meet the latency constraints of the Deep Learning applications.
We propose an efficient NN model management framework, called Edge-MultiAI, that ushers the NN models of the DL applications into the edge memory.
We show that Edge-MultiAI can increase the degree of multi-tenancy on the edge by at least 2X and the number of warm-starts by around 60% without any major loss in the inference accuracy of the applications.
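A minimal sketch of the warm-start idea: keep models resident in limited edge memory so repeated requests hit a loaded model (warm start) rather than trigger a reload (cold start). The LRU eviction policy and sizes are illustrative simplifications, not Edge-MultiAI's actual management heuristics.

```python
# Toy model cache illustrating warm vs. cold starts under a memory
# budget. LRU eviction is an assumption, not the paper's heuristic.

from collections import OrderedDict

class ModelCache:
    def __init__(self, capacity_mb: int):
        self.capacity_mb = capacity_mb
        self.loaded = OrderedDict()          # model name -> size in MB
        self.warm_starts = 0
        self.cold_starts = 0

    def infer(self, model: str, size_mb: int) -> None:
        if model in self.loaded:
            self.warm_starts += 1
            self.loaded.move_to_end(model)   # mark as recently used
            return
        self.cold_starts += 1                # must load from storage
        while (self.loaded and
               sum(self.loaded.values()) + size_mb > self.capacity_mb):
            self.loaded.popitem(last=False)  # evict least recently used
        self.loaded[model] = size_mb

cache = ModelCache(capacity_mb=300)
for name, size in [("asr", 120), ("vision", 150), ("asr", 120),
                   ("nlp", 100), ("asr", 120)]:
    cache.infer(name, size)
print(f"warm starts: {cache.warm_starts}, cold starts: {cache.cold_starts}")
```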
arXiv Detail & Related papers (2022-11-14T06:17:32Z)
- GNN at the Edge: Cost-Efficient Graph Neural Network Processing over Distributed Edge Servers [24.109721494781592]
Graph Neural Networks (GNNs) at the edge are still under exploration, presenting a stark disparity to their broad adoption elsewhere.
This paper studies the cost optimization for distributed GNN processing over a multi-tier heterogeneous edge network.
We show that our approach achieves superior performance over de facto baselines, with more than 95.8% cost reduction and fast convergence.
arXiv Detail & Related papers (2022-10-31T13:03:16Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling scheme that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
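A minimal sketch of exit-aware preemption, assuming each early-exit boundary is a safe preemption point: the network runs one exit stage at a time, and an earliest-deadline-first queue decides what runs next. Stage costs, names, and the EDF policy are illustrative, not the paper's scheduler.

```python
# Toy simulation of preemption at early-exit boundaries. Each entry in
# a request's stage list is the cost of running up to the next exit.

import heapq

def serve(requests):
    # requests: list of (deadline_ms, name, per-stage costs in ms)
    now = 0.0
    heap = [(deadline, name, list(stages))
            for deadline, name, stages in requests]
    heapq.heapify(heap)                     # earliest deadline first
    while heap:
        deadline, name, stages = heapq.heappop(heap)
        now += stages.pop(0)                # run up to the next exit
        if stages:                          # exit boundary: safe point
            heapq.heappush(heap, (deadline, name, stages))  # to preempt
        else:
            print(f"{name} finished at {now:.0f} ms (deadline {deadline} ms)")

serve([(30, "camA", [5, 5, 5]), (18, "camB", [4, 4])])
```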
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
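The bilinear relationship the summary refers to can be illustrated directly: the BNN approximation error ||w - alpha * sign(w)||^2 couples two unknowns, the scale factors alpha and the binary weights, in a product. The one-step closed-form update below is a generic baseline view, not RBONN's recurrent optimizer.

```python
# Illustration of the bilinear coupling in BNNs: fixing the binary
# weights sign(w) makes the per-channel scale alpha* = mean(|w|) the
# least-squares minimizer. This is a classic baseline, not RBONN.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64))            # 4 output channels

b = np.sign(w)                          # binary weights (held fixed)
alpha = np.abs(w).mean(axis=1)          # closed-form scale per channel

approx = alpha[:, None] * b             # bilinear product alpha * sign(w)
err = float(np.linalg.norm(w - approx) ** 2)
print(f"reconstruction error: {err:.3f}")
```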
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing scheme that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
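A sketch of the receptive-field computation that such partitioning relies on (1-D for clarity): mapping an assigned output range back through (kernel, stride, padding) layers gives the input slice an edge device must fetch. The layer parameters below are invented for illustration.

```python
# Back-map an output index range through a stack of conv layers to the
# input range it depends on, i.e., the receptive field of that slice.

def input_range(out_lo: int, out_hi: int, layers):
    lo, hi = out_lo, out_hi
    for kernel, stride, pad in reversed(layers):
        # Output index o reads input indices [o*s - p, o*s - p + k - 1].
        lo = lo * stride - pad
        hi = hi * stride - pad + kernel - 1
    return lo, hi

layers = [(3, 1, 1), (3, 2, 1), (3, 1, 1)]   # (kernel, stride, padding)
# The device assigned output columns 0..15 must fetch this input slice;
# negative indices at the border simply mean zero padding.
print(input_range(0, 15, layers))
```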
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
- BLK-REW: A Unified Block-based DNN Pruning Framework using Reweighted Regularization Method [69.49386965992464]
We propose a new block-based pruning framework that comprises a general and flexible structured pruning dimension as well as a powerful and efficient reweighted regularization method.
Our framework is universal and can be applied to both CNNs and RNNs, implying complete support for the two major kinds of computation-intensive layers.
This is the first time a weight pruning framework achieves universal coverage of both CNNs and RNNs with real-time mobile acceleration and no accuracy compromise.
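A minimal sketch of reweighted regularization for block pruning: each block's penalty grows as its magnitude shrinks, so weak blocks are driven to exact zero while salient blocks are spared. The block size, shrinkage update, and constants are illustrative assumptions, not BLK-REW's actual formulation.

```python
# Toy reweighted block-shrinkage loop: blocks with small norms receive
# a large penalty and collapse to zero over iterations.

import numpy as np

def reweighted_prune_step(W: np.ndarray, block: int = 4,
                          lam: float = 0.05, eps: float = 1e-3) -> np.ndarray:
    out = W.copy()
    for i in range(0, W.shape[0], block):
        blk = out[i:i + block]
        # Reweighting: penalty is inversely related to block magnitude.
        penalty = lam / (np.linalg.norm(blk) + eps)
        out[i:i + block] = blk * max(0.0, 1.0 - penalty)  # block soft-shrink
    return out

rng = np.random.default_rng(1)
W = rng.normal(size=32) * np.repeat(rng.random(8), 4)  # unevenly scaled blocks
for _ in range(10):
    W = reweighted_prune_step(W)
zeroed = sum(np.allclose(W[i:i + 4], 0.0) for i in range(0, 32, 4))
print(f"{zeroed} of 8 blocks pruned to zero")
```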
arXiv Detail & Related papers (2020-01-23T03:30:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.