Improving the Performance of DNN-based Software Services using Automated
Layer Caching
- URL: http://arxiv.org/abs/2209.08625v1
- Date: Sun, 18 Sep 2022 18:21:20 GMT
- Title: Improving the Performance of DNN-based Software Services using Automated
Layer Caching
- Authors: Mohammadamin Abedi, Yanni Iouannou, Pooyan Jamshidi, Hadi Hemmati
- Abstract summary: Deep Neural Networks (DNNs) have become an essential component in many application domains including web-based services.
The computational complexity in such large models can still be relatively significant, hindering low inference latency.
In this paper, we propose an end-to-end automated solution to improve the performance of DNN-based services.
- Score: 3.804240190982695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have become an essential component in many
application domains including web-based services. A variety of these services
require high throughput and (close to) real-time features, for instance, to
respond or react to users' requests or to process a stream of incoming data on
time. However, the trend in DNN design is toward larger models with many layers
and parameters to achieve more accurate results. Although these models are
often pre-trained, the computational complexity in such large models can still
be relatively significant, hindering low inference latency. Implementing a
caching mechanism is a typical systems engineering solution for speeding up a
service's response time. However, traditional caching is often not suitable for
DNN-based services. In this paper, we propose an end-to-end automated solution
to improve the performance of DNN-based services in terms of their
computational complexity and inference latency. Our caching method adopts the
ideas of self-distillation of DNN models and early exits. The proposed solution
is an automated online layer caching mechanism that allows early exiting of a
large model during inference time if the cache model in one of the early exits
is confident enough for final prediction. One of the main contributions of this
paper is that we have implemented the idea as an online caching mechanism, meaning that
the cache models do not need access to training data and perform solely based
on the incoming data at run-time, making it suitable for applications using
pre-trained models. Our experimental results on two downstream tasks (face and
object classification) show that, on average, caching can reduce the
computational complexity of those services by up to 58% (in terms of FLOP count)
and improve their inference latency by up to 46% with little to no reduction in
accuracy.
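A minimal sketch of the early-exit caching idea described above, in PyTorch-style Python. The block partitioning, cache-head design, confidence threshold, and batch-size-1 assumption are illustrative choices, not the authors' implementation; in the paper the cache models are trained online from incoming data rather than from the original training set.

```python
import torch
import torch.nn as nn

class LayerCachedModel(nn.Module):
    """Illustrative early-exit wrapper: lightweight "cache" heads are attached
    after selected blocks of a frozen, pre-trained backbone. At inference time,
    the first head whose confidence clears a threshold short-circuits the rest
    of the network; otherwise the request falls through to the full model."""

    def __init__(self, blocks, cache_heads, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)            # partitioned backbone; last block is the original classifier
        self.cache_heads = nn.ModuleDict(cache_heads)  # e.g. {"1": small_head, "3": small_head} (hypothetical placement)
        self.threshold = threshold                     # confidence required for a "cache hit"

    @torch.no_grad()
    def forward(self, x):                              # assumes a single request (batch size 1)
        for i, block in enumerate(self.blocks):
            x = block(x)
            key = str(i)
            if key in self.cache_heads:
                probs = torch.softmax(self.cache_heads[key](x), dim=-1)
                conf, pred = probs.max(dim=-1)
                if conf.item() >= self.threshold:      # confident cache hit: exit early
                    return pred.item(), i
        return x.argmax(dim=-1).item(), len(self.blocks) - 1  # cache miss: full inference
```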
Related papers
- Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse
Multi-DNN Workloads [65.47816359465155]
Running multiple deep neural networks (DNNs) in parallel has become an emerging workload on both edge devices and in data centers.
We propose Dysta, a novel scheduler that utilizes both static sparsity patterns and dynamic sparsity information for sparse multi-DNN scheduling.
Our proposed approach outperforms state-of-the-art methods, with up to a 10% decrease in the latency-constraint violation rate and a nearly 4x reduction in average normalized turnaround time.
arXiv Detail & Related papers (2023-10-17T09:25:17Z) - Transferability of Convolutional Neural Networks in Stationary Learning
Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
arXiv Detail & Related papers (2023-07-21T13:51:45Z) - Adaptive Scheduling for Edge-Assisted DNN Serving [6.437829777289881]
This paper examines how to speed up the edge server processing for multiple clients using deep neural networks.
We first design a novel scheduling algorithm to exploit the benefits of all requests that run the same DNN.
We then extend our algorithm to handle requests that use different DNNs with or without shared layers.
arXiv Detail & Related papers (2023-04-19T20:46:50Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural
Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z) - Accelerating Deep Learning Classification with Error-controlled
Approximate-key Caching [72.50506500576746]
We propose a novel caching paradigm that we name approximate-key caching.
While approximate cache hits alleviate the DL inference workload and increase system throughput, they introduce an approximation error.
We analytically model our caching system performance for classic LRU and ideal caches, we perform a trace-driven evaluation of the expected performance, and we compare the benefits of our proposed approach with the state-of-the-art similarity caching.
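A rough sketch of the approximate-key caching idea summarized above. The feature quantization and dictionary lookup are assumptions for illustration; the paper's actual keying and eviction schemes may differ.

```python
import numpy as np

class ApproximateKeyCache:
    """Toy approximate-key cache: inputs whose feature vectors quantize to the
    same bucket share a cached prediction. Coarser quantization raises the hit
    rate but also the approximation error. (Illustrative sketch only.)"""

    def __init__(self, quant_step=0.25):
        self.quant_step = quant_step
        self.store = {}  # bucket key -> cached label

    def _key(self, features: np.ndarray):
        # Quantize the feature vector so that nearby inputs collide on purpose.
        return tuple(np.round(features / self.quant_step).astype(int))

    def lookup(self, features):
        return self.store.get(self._key(features))   # None on a miss

    def insert(self, features, label):
        self.store[self._key(features)] = label

def classify(features, cache, model):
    cached = cache.lookup(features)
    if cached is not None:        # approximate hit: skip DNN inference entirely
        return cached
    label = model(features)       # miss: run the full classifier
    cache.insert(features, label)
    return label
```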
arXiv Detail & Related papers (2021-12-13T13:49:11Z) - Learning from Images: Proactive Caching with Parallel Convolutional
Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only a 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z) - Accelerating Deep Learning Inference via Learned Caches [11.617579969991294]
Deep Neural Networks (DNNs) are witnessing increased adoption in multiple domains owing to their high accuracy in solving real-world problems.
Current low-latency solutions trade off accuracy or fail to exploit the inherent temporal locality in prediction-serving workloads.
We present the design of GATI, an end-to-end prediction serving system that incorporates learned caches for low-latency inference.
arXiv Detail & Related papers (2021-01-18T22:13:08Z) - CacheNet: A Model Caching Framework for Deep Learning Inference on the
Edge [3.398008512297358]
CacheNet is a model caching framework for machine perception applications.
It caches low-complexity models on end devices and high-complexity (or full) models on edge or cloud servers.
It is 58-217% faster than baseline approaches that run inference tasks on end devices or edge servers alone.
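The tiered idea behind CacheNet admits a compact sketch; the confidence test, threshold, and remote call below are hypothetical stand-ins rather than CacheNet's actual protocol.

```python
# Minimal sketch of two-tier model caching: a low-complexity model on the end
# device answers when it is confident, otherwise the input is deferred to the
# full model on an edge/cloud server.
import torch

@torch.no_grad()
def tiered_predict(x, device_model, query_server, threshold=0.8):
    probs = torch.softmax(device_model(x), dim=-1)
    conf, pred = probs.max(dim=-1)
    if conf.item() >= threshold:        # "cache hit" on the small on-device model
        return pred.item()
    return query_server(x)              # "cache miss": fall back to the full model

# Example wiring (hypothetical model and transport):
# result = tiered_predict(image, small_on_device_model, lambda t: remote_infer(t))
```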
arXiv Detail & Related papers (2020-07-03T16:32:14Z) - Serving DNNs like Clockwork: Performance Predictability from the Bottom
Up [4.293235171619925]
Machine learning inference is becoming a core building block for interactive web applications.
Existing model serving architectures use well-known reactive techniques to alleviate common-case sources of latency.
We observe that inference using Deep Neural Network (DNN) models has deterministic performance.
arXiv Detail & Related papers (2020-06-03T18:18:45Z) - Accelerating Deep Learning Inference via Freezing [8.521443408415868]
We present Freeze Inference, a system that introduces approximate caching at each intermediate layer.
We find that this can potentially reduce the number of effective layers by half for 91.58% of CIFAR-10 requests run on ResNet-18.
arXiv Detail & Related papers (2020-02-07T07:03:58Z)
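A simplified sketch of per-layer approximate caching in the spirit of Freeze Inference. The distance metric, threshold, and linear scan are stand-ins; the system's actual lookup structure is not reproduced here.

```python
import numpy as np

def freeze_style_inference(x, layers, layer_caches, dist_threshold=1.0):
    """Illustrative approximate caching at intermediate layers: after each
    layer, look for a previously seen activation close to the current one and
    reuse its label, skipping the remaining layers on a hit."""
    h, trace = x, []
    for i, layer in enumerate(layers):
        h = layer(h)
        trace.append(h)
        for cached_h, cached_label in layer_caches[i]:
            if np.linalg.norm(h - cached_h) <= dist_threshold:
                return cached_label                 # approximate hit at layer i
    label = int(np.argmax(h))                       # miss everywhere: full inference
    for i, h_i in enumerate(trace):
        layer_caches[i].append((h_i, label))        # populate caches for future requests
    return label
```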