DynaMIX: Resource Optimization for DNN-Based Real-Time Applications on a
Multi-Tasking System
- URL: http://arxiv.org/abs/2302.01568v1
- Date: Fri, 3 Feb 2023 06:33:28 GMT
- Title: DynaMIX: Resource Optimization for DNN-Based Real-Time Applications on a
Multi-Tasking System
- Authors: Minkyoung Cho and Kang G. Shin
- Abstract summary: More and more DNN-based apps, such as object detection and classification, have been developed and deployed on autonomous vehicles (AVs).
To meet their growing expectations and requirements, AVs should "optimize" the use of their limited onboard computing resources across multiple concurrent in-vehicle apps.
We propose DynaMIX, which optimizes the resource requirements of concurrent apps and aims to maximize their execution accuracy.
- Score: 20.882393722208608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep neural networks (DNNs) prove their importance and feasibility, more
and more DNN-based apps, such as detection and classification of objects, have
been developed and deployed on autonomous vehicles (AVs). To meet their growing
expectations and requirements, AVs should "optimize" use of their limited
onboard computing resources for multiple concurrent in-vehicle apps while
satisfying their timing requirements (especially for safety). That is,
real-time AV apps should share the limited on-board resources with other
concurrent apps without missing their deadlines dictated by the frame rate of a
camera that generates and provides input images to the apps. However, most, if
not all, existing DNN solutions focus on enhancing the concurrency of their
specific hardware without dynamically optimizing/modifying the DNN apps'
resource requirements, subject to the number of running apps, owing to their
high computational cost. To mitigate this limitation, we propose DynaMIX
(Dynamic MIXed-precision model construction), which optimizes the resource
requirement of concurrent apps and aims to maximize execution accuracy. To
realize a real-time resource optimization, we formulate an optimization problem
using app performance profiles to consider both the accuracy and worst-case
latency of each app. We also propose dynamic model reconfiguration by lazy
loading only the selected layers at runtime to reduce the overhead of loading
the entire model. DynaMIX is evaluated in terms of constraint satisfaction and
inference accuracy for a multi-tasking system and compared against
state-of-the-art solutions, demonstrating its effectiveness and feasibility
under various environmental/operating conditions.
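The abstract's resource optimization, selecting a precision level per app from offline performance profiles so that total accuracy is maximized while worst-case latencies fit within the camera's frame period, can be illustrated with a minimal sketch. All profile numbers, app names, and the exhaustive-search strategy below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of DynaMIX-style mixed-precision selection.
# Profiles, app names, and numbers are invented for illustration only.
from itertools import product

# Per-app performance profiles: precision (bits) -> (accuracy, worst-case latency in ms)
PROFILES = {
    "detector":   {8: (0.70, 12.0), 16: (0.76, 21.0), 32: (0.79, 38.0)},
    "classifier": {8: (0.88, 6.0),  16: (0.92, 11.0), 32: (0.94, 19.0)},
}

def select_precisions(profiles, deadline_ms):
    """Pick one precision level per app to maximize summed accuracy while
    keeping the summed worst-case latency within the camera frame period."""
    apps = list(profiles)
    best, best_acc = None, -1.0
    # Exhaustive search; a real system would use a faster solver or heuristic.
    for choice in product(*(profiles[a] for a in apps)):
        acc = sum(profiles[a][p][0] for a, p in zip(apps, choice))
        lat = sum(profiles[a][p][1] for a, p in zip(apps, choice))
        if lat <= deadline_ms and acc > best_acc:
            best, best_acc = dict(zip(apps, choice)), acc
    return best, best_acc

config, acc = select_precisions(PROFILES, deadline_ms=33.3)  # ~30 FPS camera
print(config)  # precision chosen per app; only these layers would be lazily loaded
```

Under these made-up profiles the search settles on a mid-precision configuration for both apps, trading some accuracy for deadline compliance; the paper's lazy-loading idea would then load only the layers of the selected configurations rather than every full model.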
Related papers
- CARIn: Constraint-Aware and Responsive Inference on Heterogeneous Devices for Single- and Multi-DNN Workloads [4.556037016746581]
This article addresses the challenges inherent in optimising the execution of deep neural networks (DNNs) on mobile devices.
We introduce CARIn, a novel framework designed for the optimised deployment of both single- and multi-DNN applications.
We observe a substantial enhancement in the fair treatment of the problem's objectives, reaching 1.92x when compared to single-model designs and up to 10.69x in contrast to the state-of-the-art OODIn framework.
arXiv Detail & Related papers (2024-09-02T09:18:11Z) - DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - Context-aware Multi-Model Object Detection for Diversely Heterogeneous
Compute Systems [0.32634122554914]
A one-size-fits-all approach to object detection using deep neural networks (DNNs) leads to inefficient utilization of computational resources.
We propose SHIFT, which continuously selects from a variety of DNN-based OD models depending on dynamically changing contextual information and computational constraints.
Our proposed methodology results in improvements of up to 7.5x in energy usage and 2.8x in latency compared to state-of-the-art GPU-based single model OD approaches.
arXiv Detail & Related papers (2024-02-12T05:38:11Z) - Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse
Multi-DNN Workloads [65.47816359465155]
Running multiple deep neural networks (DNNs) in parallel has become an emerging workload on edge devices.
We propose Dysta, a novel scheduler that utilizes both static sparsity patterns and dynamic sparsity information for the sparse multi-DNN scheduling.
Our proposed approach outperforms the state-of-the-art methods with up to 10% decrease in latency constraint violation rate and nearly 4X reduction in average normalized turnaround time.
arXiv Detail & Related papers (2023-10-17T09:25:17Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - PLiNIO: A User-Friendly Library of Gradient-based Methods for
Complexity-aware DNN Optimization [3.460496851517031]
PLiNIO is an open-source library implementing a comprehensive set of state-of-the-art DNN design automation techniques.
We show that PLiNIO achieves up to 94.34% memory reduction for a 1% accuracy drop compared to a baseline architecture.
arXiv Detail & Related papers (2023-07-18T07:11:14Z) - Energy-efficient Task Adaptation for NLP Edge Inference Leveraging
Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimized for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z) - U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture
Search [50.33956216274694]
Optimizing resource utilization on target platforms is key to achieving high performance during DNN inference.
We propose a novel hardware-aware NAS framework that not only optimizes for task accuracy and inference latency but also for resource utilization.
We achieve 2.8 - 4x speedup for DNN inference compared to prior hardware-aware NAS methods.
arXiv Detail & Related papers (2022-03-23T13:44:15Z) - Joint Multi-User DNN Partitioning and Computational Resource Allocation
for Collaborative Edge Intelligence [21.55340197267767]
Mobile Edge Computing (MEC) has emerged as a promising supporting architecture providing a variety of resources to the network edge.
With the assistance of edge servers, user equipments (UEs) are able to run deep neural network (DNN) based AI applications.
We propose an algorithm called Iterative Alternating Optimization (IAO) that can achieve the optimal solution in time.
arXiv Detail & Related papers (2020-07-15T09:40:13Z) - PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with
Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.