DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device
- URL: http://arxiv.org/abs/2104.09949v1
- Date: Tue, 20 Apr 2021 13:20:15 GMT
- Title: DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device
- Authors: Mario Almeida, Stefanos Laskaridis, Stylianos I. Venieris, Ilias
Leontiadis, Nicholas D. Lane
- Abstract summary: We present DynO, a distributed inference framework that combines the best of both worlds to address several challenges.
We show that DynO outperforms the current state-of-the-art, improving throughput by over an order of magnitude over device-only execution.
- Score: 17.43467167013752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been an explosive growth of mobile and embedded
applications using convolutional neural networks (CNNs). To alleviate their
excessive computational demands, developers have traditionally resorted to
cloud offloading, inducing high infrastructure costs and a strong dependence on
networking conditions. On the other end, the emergence of powerful SoCs is
gradually enabling on-device execution. Nonetheless, low- and mid-tier
platforms still struggle to run state-of-the-art CNNs at sufficient speed. In this
paper, we present DynO, a distributed inference framework that combines the
best of both worlds to address several challenges, such as device
heterogeneity, varying bandwidth and multi-objective requirements. Key
components that enable this are its novel CNN-specific data packing method,
which exploits the variability of precision needs in different parts of the CNN
when onloading computation, and its novel scheduler that jointly tunes the
partition point and transferred data precision at run time to adapt inference
to its execution environment. Quantitative evaluation shows that DynO
outperforms the current state-of-the-art, improving throughput by over an order
of magnitude over device-only execution and up to 7.9x over competing CNN
offloading systems, with up to 60x less data transferred.
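As a concrete illustration of the transfer-side idea, the sketch below uniformly quantizes the activations produced at a split point to a chosen bitwidth and packs them for transmission; in DynO, the scheduler tunes this bitwidth jointly with the partition point at run time. This is a generic quantize-and-pack sketch under assumed tensor shapes, not DynO's actual packing scheme.

```python
# Generic quantize-and-pack sketch for split-point activations; illustrative
# only, not DynO's actual packing method.
import numpy as np

def pack_activations(act: np.ndarray, bits: int):
    """Uniformly quantize a float32 activation tensor to `bits` bits per value
    and serialize it, returning the payload plus metadata for dequantization."""
    lo, hi = float(act.min()), float(act.max())
    levels = (1 << bits) - 1
    q = np.round((act - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)
    if bits == 4:                                   # pack two 4-bit codes per byte
        q = q.reshape(-1)
        if q.size % 2:
            q = np.concatenate([q, np.zeros(1, np.uint8)])
        q = (q[0::2] << 4) | q[1::2]
    return q.tobytes(), (lo, hi, bits, act.shape)

act = np.random.randn(1, 64, 56, 56).astype(np.float32)  # assumed split-point tensor
for bits in (8, 4):
    payload, meta = pack_activations(act, bits)
    print(f"{bits}-bit payload: {len(payload)} bytes vs {act.nbytes} bytes raw")
```

Cutting the transfer from 32-bit floats to 4-bit codes shrinks the payload by 8x, which is what makes onloading attractive even on slow links.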
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these constraints by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Transferability of Convolutional Neural Networks in Stationary Learning Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
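The mechanical reason such transfer is possible is that a fully convolutional network has no input-size-dependent layers, so the same weights trained on small windows can be evaluated on much larger ones. A minimal PyTorch sketch; the architecture is an illustrative assumption, not the paper's model:

```python
# A fully convolutional network accepts inputs larger than it was trained on,
# since every layer is resolution-agnostic.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),      # per-location prediction head
)

small = torch.randn(1, 1, 32, 32)         # training-sized window
large = torch.randn(1, 1, 256, 256)       # much larger window at inference
print(model(small).shape)                 # torch.Size([1, 1, 32, 32])
print(model(large).shape)                 # torch.Size([1, 1, 256, 256])
```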
arXiv Detail & Related papers (2023-07-21T13:51:45Z)
- Slimmable Encoders for Flexible Split DNNs in Bandwidth and Resource Constrained IoT Systems [12.427821850039448]
We propose a novel split computing approach based on slimmable ensemble encoders.
The key advantage of our design is the ability to adapt computational load and transmitted data size in real time with minimal overhead and delay.
Our model outperforms existing solutions in terms of compression efficacy and execution time, especially in the context of weak mobile devices.
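A minimal sketch of the underlying mechanism, assuming a single slimmable convolution whose active output width is chosen at run time; real slimmable encoders also slim the input channels of later layers and use width-specific normalization, so this is an illustration rather than the paper's architecture:

```python
# One set of weights, evaluated at a chosen fraction of its filters, so both
# compute and the size of the transmitted features adapt at run time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)

    def forward(self, x, width=1.0):
        # Keep only the first `width` fraction of output filters.
        k = max(1, int(self.weight.shape[0] * width))
        return F.conv2d(x, self.weight[:k], padding=1)

enc = SlimmableConv(3, 64)
x = torch.randn(1, 3, 224, 224)
full = enc(x, width=1.0)    # 64 channels: best fidelity, largest transfer
slim = enc(x, width=0.25)   # 16 channels: ~4x less data to transmit
print(full.shape, slim.shape)
```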
arXiv Detail & Related papers (2023-06-22T06:33:12Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling approach that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
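For context, a minimal early-exit forward pass looks as follows: each intermediate head returns as soon as its confidence clears a threshold, which is what creates the dynamic arrival and exit behavior the scheduler must handle. The stage and head modules here are illustrative placeholders, not Fluid Batching's NPU scheduler:

```python
# Generic early-exit inference: stop at the first sufficiently confident head.
import torch
import torch.nn as nn

def early_exit_forward(x, blocks, heads, threshold=0.9):
    """Run stages in order; return as soon as an exit head is confident."""
    for block, head in zip(blocks, heads):
        x = block(x)
        probs = torch.softmax(head(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:
            return pred.item(), conf.item()
    return pred.item(), conf.item()        # final exit if none was confident

blocks = nn.ModuleList([nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(3)])
heads  = nn.ModuleList([nn.Linear(64, 10) for _ in range(3)])
pred, conf = early_exit_forward(torch.randn(1, 64), blocks, heads)
print(pred, conf)
```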
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- Dynamic Split Computing for Efficient Deep Edge Intelligence [78.4233915447056]
We introduce dynamic split computing, where the optimal split location is dynamically selected based on the state of the communication channel.
We show that dynamic split computing achieves faster inference in edge computing environments where the data rate and server load vary over time.
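A minimal sketch of rate-driven split selection, using assumed per-layer profiling numbers: as the measured data rate changes, the latency-optimal split moves between device-only execution and earlier split points.

```python
# Assumed offline profile: cumulative device time, remaining server time, and
# transferred data when splitting after s layers (s = 0 sends the raw input;
# the last entry is device-only execution). Numbers are illustrative.
device_ms = [0, 10, 35, 70, 120]
server_ms = [12, 10, 7, 4, 0]
tx_kbits  = [1500, 400, 150, 50, 0]

def best_split(rate_kbps: float) -> int:
    """Pick the split with the lowest estimated end-to-end latency."""
    latency = [device_ms[s] + tx_kbits[s] / rate_kbps * 1000 + server_ms[s]
               for s in range(len(tx_kbits))]
    return min(range(len(latency)), key=latency.__getitem__)

for rate in (500, 5_000, 50_000):   # kbps: re-evaluated as the channel changes
    print(f"{rate} kbps -> split at layer {best_split(rate)}")
```

On the slow link the whole network stays on device; as the rate grows, earlier splits win because shipping intermediate features becomes cheaper than finishing locally.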
arXiv Detail & Related papers (2022-05-23T12:35:18Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a scheme can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URLLC.
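As one concrete reading of a latency- and accuracy-aware reward, a sketch of the kind of objective such a DRL agent optimizes; the budget and weighting are illustrative assumptions, not the paper's design:

```python
# Reward accuracy, penalize exceeding a latency budget; weighting is assumed.
def reward(latency_ms: float, accuracy: float,
           latency_budget_ms: float = 100.0, lam: float = 1.0) -> float:
    over = max(0.0, latency_ms - latency_budget_ms) / latency_budget_ms
    return accuracy - lam * over

print(reward(80.0, 0.92))    # within budget: reward equals accuracy
print(reward(150.0, 0.95))   # 50% over budget costs 0.5 with lam = 1.0
```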
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- FTPipeHD: A Fault-Tolerant Pipeline-Parallel Distributed Training Framework for Heterogeneous Edge Devices [21.513786638743234]
FTPipeHD is a novel framework that trains deep learning models across heterogeneous devices.
It is shown that FTPipeHD is 6.8x faster in training than the state-of-the-art method when the computing capacity of the best device is 10x greater than that of the worst one.
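A simple capacity-proportional partitioning heuristic illustrates the kind of decision such a framework makes when splitting a model across heterogeneous devices; this is an assumed heuristic for illustration, not FTPipeHD's actual algorithm (which also handles fault tolerance):

```python
# Assign contiguous layer ranges so each device's share of total work is
# roughly proportional to its compute capacity.
def partition(layer_costs, capacities):
    total = sum(layer_costs)
    shares = [c / sum(capacities) * total for c in capacities]
    stages, start, acc, d = [], 0, 0.0, 0
    for i, cost in enumerate(layer_costs):
        acc += cost
        if acc >= shares[d] and d < len(capacities) - 1:
            stages.append((start, i + 1))
            start, acc, d = i + 1, 0.0, d + 1
    stages.append((start, len(layer_costs)))
    return stages

# 8 equal-cost layers, one device twice as strong as the other two.
print(partition([1] * 8, capacities=[4, 2, 2]))  # [(0, 4), (4, 6), (6, 8)]
```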
arXiv Detail & Related papers (2021-10-06T14:00:22Z)
- SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud [13.315410752311768]
A popular alternative comprises offloading CNN processing to powerful cloud-based servers.
SPINN is a distributed inference system that employs synergistic device-cloud computation together with a progressive inference method.
It provides robust operation under uncertain connectivity conditions and significant energy savings compared to cloud-centric execution.
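A minimal sketch of the synergistic pattern: compute a cheap on-device stage with an early exit, and offload the remaining layers only when confidence is low. The modules are illustrative placeholders, not SPINN's networks:

```python
# Early exit on device when confident; otherwise ship features to the cloud part.
import torch
import torch.nn as nn

def infer(x, device_part, exit_head, cloud_part, threshold=0.9):
    feats = device_part(x)                        # cheap on-device stage
    probs = torch.softmax(exit_head(feats), dim=-1)
    conf, pred = probs.max(dim=-1)
    if conf.item() >= threshold:                  # confident: stay on device
        return pred
    return cloud_part(feats).argmax(dim=-1)      # else offload the rest

device_part = nn.Sequential(nn.Linear(32, 32), nn.ReLU())
exit_head, cloud_part = nn.Linear(32, 10), nn.Linear(32, 10)
print(infer(torch.randn(1, 32), device_part, exit_head, cloud_part))
```

Confident samples never touch the network at all, which is where the robustness to poor connectivity and the energy savings come from.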
arXiv Detail & Related papers (2020-08-14T15:00:19Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in the design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
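A sketch of the core idea: every 3x3 kernel is pruned to one of a small set of fixed sparsity patterns, and this regularity is what lets a compiler generate dense-like code. The patterns below are illustrative, not PatDNN's actual pattern set:

```python
# Prune each 3x3 kernel to the fixed pattern that preserves the most magnitude.
import torch

patterns = torch.tensor([
    [[0, 1, 0], [1, 1, 1], [0, 0, 0]],   # each pattern keeps 4 of 9 weights
    [[0, 0, 0], [1, 1, 1], [0, 1, 0]],
    [[0, 1, 0], [1, 1, 0], [0, 1, 0]],
    [[0, 1, 0], [0, 1, 1], [0, 1, 0]],
], dtype=torch.float32)

def apply_patterns(weight):                       # weight: (out, in, 3, 3)
    flat = weight.reshape(-1, 3, 3)
    # Score each kernel against each pattern by retained magnitude.
    scores = torch.einsum('kij,pij->kp', flat.abs(), patterns)
    best = scores.argmax(dim=1)
    return (flat * patterns[best]).reshape(weight.shape)

w = torch.randn(16, 8, 3, 3)
pruned = apply_patterns(w)
print((pruned != 0).float().mean())               # ~4/9 of weights remain
```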
arXiv Detail & Related papers (2020-01-01T04:52:07Z)