Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi-DNN Workloads
- URL: http://arxiv.org/abs/2310.11096v1
- Date: Tue, 17 Oct 2023 09:25:17 GMT
- Title: Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi-DNN Workloads
- Authors: Hongxiang Fan, Stylianos I. Venieris, Alexandros Kouris, Nicholas D. Lane
- Abstract summary: Running multiple deep neural networks (DNNs) in parallel has become an emerging workload on both edge devices and in data centers.
We propose Dysta, a novel scheduler that utilizes both static sparsity patterns and dynamic sparsity information for sparse multi-DNN scheduling.
Our proposed approach outperforms state-of-the-art methods with up to a 10% decrease in latency constraint violation rate and a nearly 4X reduction in average normalized turnaround time.
- Score: 65.47816359465155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Running multiple deep neural networks (DNNs) in parallel has become an
emerging workload on both edge devices, such as mobile phones where multiple
tasks serve a single user for daily activities, and in data centers, where various
requests are raised by millions of users, as seen with large language models.
To reduce the costly computational and memory requirements of these workloads,
various efficient sparsification approaches have been introduced, resulting in
widespread sparsity across different types of DNN models. In this context,
there is an emerging need for scheduling sparse multi-DNN workloads, a problem
that is largely unexplored in previous literature. This paper systematically
analyses the use-cases of multiple sparse DNNs and investigates the
opportunities for optimizations. Based on these findings, we propose Dysta, a
novel bi-level dynamic and static scheduler that utilizes both static sparsity
patterns and dynamic sparsity information for the sparse multi-DNN scheduling.
Both static and dynamic components of Dysta are jointly designed at the
software and hardware levels, respectively, to improve and refine the
scheduling approach. To facilitate future progress in the study of this class
of workloads, we construct a public benchmark that contains sparse multi-DNN
workloads across different deployment scenarios, spanning from mobile phones
and AR/VR wearables to data centers. A comprehensive evaluation on the sparse
multi-DNN benchmark demonstrates that our proposed approach outperforms the
state-of-the-art methods with up to 10% decrease in latency constraint
violation rate and nearly 4X reduction in average normalized turnaround time.
Our artifacts and code are publicly available at:
https://github.com/SamsungLabs/Sparse-Multi-DNN-Scheduling.
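To make the scheduling idea concrete, the following Python sketch shows how a bi-level scheduler of this kind could combine an offline latency estimate derived from static weight sparsity with a run-time correction from observed activation sparsity, and then pick the task with the least remaining work; it also includes the standard average normalized turnaround time (ANTT) metric referenced above. All names (`SparseTask`, `remaining_latency`, `dysta_like_schedule`, `antt`) and the example numbers are illustrative assumptions, not the paper's implementation; in Dysta the dynamic refinement is realized at the hardware level, so this is only a software-level approximation of the idea.

```python
# A minimal, software-only sketch of bi-level sparsity-aware scheduling.
# Names and numbers are illustrative assumptions, not the Dysta implementation;
# the paper's dynamic refinement happens at the hardware level.
from dataclasses import dataclass

@dataclass
class SparseTask:
    name: str
    static_latency: float   # offline estimate from static weight-sparsity patterns
    progress: float = 0.0   # fraction of the model already executed
    dyn_scale: float = 1.0  # run-time correction from observed activation sparsity

def remaining_latency(task: SparseTask) -> float:
    """Static estimate of remaining work, refined by dynamic sparsity feedback."""
    return task.static_latency * (1.0 - task.progress) * task.dyn_scale

def dysta_like_schedule(tasks):
    """Greedy least-remaining-work ordering; priorities are recomputed each step
    because dyn_scale (and progress) can change while tasks wait or run."""
    pending, order = list(tasks), []
    while pending:
        pending.sort(key=remaining_latency)
        order.append(pending.pop(0).name)
    return order

def antt(shared_turnaround, isolated_turnaround):
    """Average normalized turnaround time: mean of shared/isolated latency ratios."""
    ratios = [s / i for s, i in zip(shared_turnaround, isolated_turnaround)]
    return sum(ratios) / len(ratios)

if __name__ == "__main__":
    tasks = [
        SparseTask("keyword_spotting", static_latency=8.0, dyn_scale=0.6),  # very sparse activations
        SparseTask("detection", static_latency=5.0),
        SparseTask("translation", static_latency=12.0, dyn_scale=0.9),
    ]
    print(dysta_like_schedule(tasks))       # ['keyword_spotting', 'detection', 'translation']
    print(round(antt([10.0, 6.0, 20.0], [8.0, 5.0, 12.0]), 2))  # e.g. 1.37
```

Ordering by sparsity-refined remaining latency is the kind of reordering that lets short, highly sparse requests overtake long dense ones, which is plausibly where turnaround-time gains of this sort come from; see the repository above for the actual artifacts.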
Related papers
- SoD$^2$: Statically Optimizing Dynamic Deep Neural Network [13.958672527377722]
SoD$^2$ is a comprehensive framework for optimizing dynamic DNNs.
This framework statically determines the shapes of operators as known constants, symbolic constants, or operations on these.
We show that SoD$^2$ runs up to $3.9\times$ faster than existing systems while reducing peak memory consumption by up to $88\%$.
arXiv Detail & Related papers (2024-02-29T23:04:01Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- DOMINO: Domain-invariant Hyperdimensional Classification for Multi-Sensor Time Series Data [14.434647668734184]
We propose DOMINO, a novel HDC learning framework addressing the distribution shift problem in noisy multi-sensor time-series data.
DOMINO achieves on average 2.04% higher accuracy than state-of-the-art (SOTA) DNN-based domain generalization techniques, and delivers 16.34x faster training and 2.89x faster inference.
arXiv Detail & Related papers (2023-08-07T04:44:12Z)
- Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML [4.2019872499238256]
We propose a novel strategy for deploying deep neural networks on microcontrollers (TinyML) based on multi-objective Bayesian optimization (MOBOpt).
Our methodology aims to efficiently find trade-offs between a DNN's predictive accuracy, its memory consumption on a given target system, and its computational complexity.
arXiv Detail & Related papers (2023-05-23T14:31:52Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- A Low-Complexity Approach to Rate-Distortion Optimized Variable Bit-Rate Compression for Split DNN Computing [5.3221129103999125]
Split computing has emerged as a recent paradigm for implementing DNN-based AI workloads.
We present an approach that addresses the challenge of optimizing the rate-accuracy-complexity trade-off.
Our approach is remarkably lightweight during both training and inference, highly effective, and achieves excellent rate-distortion performance.
arXiv Detail & Related papers (2022-08-24T15:02:11Z)
- Dynamic Split Computing for Efficient Deep Edge Intelligence [78.4233915447056]
We introduce dynamic split computing, where the optimal split location is dynamically selected based on the state of the communication channel.
We show that dynamic split computing achieves faster inference in edge computing environments where the data rate and server load vary over time.
arXiv Detail & Related papers (2022-05-23T12:35:18Z) - Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Dynamic Sparsity Neural Networks for Automatic Speech Recognition [44.352231175123215]
We present Dynamic Sparsity Neural Networks (DSNN) that, once trained, can instantly switch to any predefined sparsity configuration at run time; a minimal sketch follows this entry.
Our trained DSNN model can therefore greatly ease the training process and simplify deployment in diverse, resource-constrained scenarios.
arXiv Detail & Related papers (2020-05-16T22:08:54Z)
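For the DSNN entry above, the sketch below illustrates what run-time switching between predefined sparsity configurations can look like in practice: a single layer keeps one magnitude-pruning mask per supported sparsity level and selects among them at inference time. The class name (`SwitchableSparseLinear`), the mask construction, and the sparsity levels are assumptions for illustration only, not the DSNN training or deployment procedure.

```python
# Illustrative sketch of switching a layer between predefined sparsity levels
# at run time; the class name, mask construction, and levels are assumptions,
# not the DSNN paper's implementation.
import torch
import torch.nn as nn

class SwitchableSparseLinear(nn.Module):
    def __init__(self, in_features, out_features, sparsity_levels=(0.0, 0.5, 0.9)):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Precompute one magnitude-pruning mask per predefined sparsity level.
        self.masks = {}
        with torch.no_grad():
            w = self.linear.weight.abs()
            for s in sparsity_levels:
                k = int(s * w.numel())
                if k == 0:
                    mask = torch.ones_like(w)
                else:
                    thresh = w.flatten().kthvalue(k).values
                    mask = (w > thresh).float()
                self.masks[s] = mask
        self.active = sparsity_levels[0]

    def set_sparsity(self, level):
        """Instantly switch to one of the predefined sparsity configurations."""
        self.active = level

    def forward(self, x):
        # Apply the currently selected mask to the shared dense weights.
        return nn.functional.linear(x, self.linear.weight * self.masks[self.active],
                                    self.linear.bias)

layer = SwitchableSparseLinear(128, 64)
x = torch.randn(4, 128)
layer.set_sparsity(0.9)   # e.g. a tight latency budget -> use the sparsest configuration
y = layer(x)
```

In principle, a scheduler such as Dysta could pair with models of this kind by requesting a sparser configuration when a task risks violating its latency constraint.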