SmartDet: Context-Aware Dynamic Control of Edge Task Offloading for
Mobile Object Detection
- URL: http://arxiv.org/abs/2201.04235v1
- Date: Tue, 11 Jan 2022 23:01:35 GMT
- Title: SmartDet: Context-Aware Dynamic Control of Edge Task Offloading for
Mobile Object Detection
- Authors: Davide Callegaro and Francesco Restuccia and Marco Levorato
- Abstract summary: Mobile devices increasingly rely on object detection (OD) through deep neural networks (DNNs) to perform critical tasks.
Low-complexity object tracking (OT) can be used with OD, where the latter is periodically applied to generate "fresh" references for tracking.
We propose parallel OT (at the mobile device) and OD (at the edge server) processes that are resilient to large OD latency.
- Score: 19.106380479438172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile devices increasingly rely on object detection (OD) through deep neural
networks (DNNs) to perform critical tasks. Due to their high complexity, the
execution of these DNNs requires excessive time and energy. Low-complexity
object tracking (OT) can be used with OD, where the latter is periodically
applied to generate "fresh" references for tracking. However, the frames
processed with OD incur large delays, which may make the reference outdated and
degrade tracking quality. Herein, we propose to use edge computing in this
context, and establish parallel OT (at the mobile device) and OD (at the edge
server) processes that are resilient to large OD latency. We propose Katch-Up,
a novel tracking mechanism that improves the system resilience to excessive OD
delay. However, while Katch-Up significantly improves performance, it also
increases the computing load of the mobile device. Hence, we design SmartDet, a
low-complexity controller based on deep reinforcement learning (DRL) that
learns to control the trade-off between resource utilization and OD
performance. SmartDet takes as input context information related to the
current video content and the current network conditions to optimize the
frequency and type of OD offloading, as well as Katch-Up utilization. We extensively
evaluate SmartDet on a real-world testbed composed of a Jetson Nano as the mobile
device and a GTX 980 Ti as the edge server, connected through a Wi-Fi link.
Experimental results show that SmartDet achieves an optimal balance between
tracking performance, measured as mean Average Recall (mAR), and resource usage. With
respect to a baseline with full Katch-Up usage and maximum channel usage, we
still increase mAR by 4% while using 50% less channel and 30% less of the power
resources associated with Katch-Up. With respect to a fixed strategy using
minimal resources, we increase mAR by 20% while using Katch-Up on only 1/3 of the
frames.
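The core pattern in the abstract (tracking runs on every frame at the mobile device, while detections offloaded to the edge return after a latency and refresh the tracker's reference) can be illustrated with a minimal simulation. All names, the fixed offload period, and the latency value below are hypothetical illustrations, not SmartDet's actual design; SmartDet replaces the fixed offload period with a DRL-learned decision and uses Katch-Up to reduce reference staleness.

```python
from collections import deque

OD_LATENCY = 3      # frames an offloaded detection takes to come back (illustrative)
OFFLOAD_PERIOD = 5  # fixed policy: offload every k-th frame (SmartDet learns this)

def run(num_frames):
    """Simulate parallel OT (per-frame) and OD (offloaded, delayed) processes.

    Returns the staleness of the tracker's reference at each frame, i.e. how
    many frames old the last completed detection is.
    """
    in_flight = deque()   # (frame_sent, frame_ready) pairs pending at the edge
    reference_age = []
    last_reference = 0    # assume a detection for frame 0 is available at start
    for t in range(num_frames):
        # Detections that finished on the edge server refresh the reference;
        # the reference still describes the *sent* frame, hence it is outdated
        # by the OD latency when it arrives.
        while in_flight and in_flight[0][1] <= t:
            sent, _ = in_flight.popleft()
            last_reference = sent
        # Controller decision: offload this frame for OD?
        if t % OFFLOAD_PERIOD == 0:
            in_flight.append((t, t + OD_LATENCY))
        reference_age.append(t - last_reference)
    return reference_age
```

Running `run(12)` shows staleness growing between offloads and never dropping below the OD latency, which is exactly the degradation Katch-Up is designed to mitigate by re-processing buffered frames after a fresh detection arrives.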
Related papers
- Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity [39.483346492111515]
Linear recurrent neural networks enable powerful long-range sequence modeling with constant memory usage and time-per-token during inference.
Unstructured sparsity offers a compelling solution, enabling substantial reductions in compute and memory requirements when accelerated by compatible hardware platforms.
We find that highly sparse linear RNNs consistently achieve better efficiency-performance trade-offs than dense baselines.
arXiv Detail & Related papers (2025-02-03T13:09:21Z)
- USEFUSE: Utile Stride for Enhanced Performance in Fused Layer Architecture of Deep Neural Networks [0.6435156676256051]
This study presents the Sum-of-Products (SOP) units for convolution, which utilize low-latency left-to-right bit-serial arithmetic.
An effective mechanism detects and skips inefficient convolutions after ReLU layers, minimizing power consumption.
Two designs cater to varied demands: one focuses on minimal response time for mission-critical applications, and another focuses on resource-constrained devices with comparable latency.
arXiv Detail & Related papers (2024-12-18T11:04:58Z)
- Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks [41.23236059700041]
Federated learning (FL) is a distributed learning framework where users train a global model by exchanging local model updates with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising solution to serve numerous users on the same time/frequency resource with similar rates.
In this paper, we co-optimize the physical layer with the FL application to mitigate the straggler effect.
arXiv Detail & Related papers (2024-12-14T16:08:05Z)
- Efficient Multi-Object Tracking on Edge Devices via Reconstruction-Based Channel Pruning [0.2302001830524133]
We propose a neural network pruning method specifically tailored to compress complex networks, such as those used in modern MOT systems.
We achieve model size reductions of up to 70% while maintaining a high level of accuracy and further improving performance on the Jetson Orin Nano.
arXiv Detail & Related papers (2024-10-11T12:37:42Z)
- Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design [18.57172631588624]
We propose a Dynamic Deep neural network assisted by a Content-Aware data processing pipeline to reduce the number of models down to one.
Our method achieves better PSNR and real-time performance (33 FPS) on an off-the-shelf mobile phone.
arXiv Detail & Related papers (2024-07-03T05:17:26Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for their practical application in real driving scenarios due to the high level of latency.
In this paper, we explore the use of the neural architecture search (NAS) methods to search for efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- Context-aware Multi-Model Object Detection for Diversely Heterogeneous Compute Systems [0.32634122554914]
A one-size-fits-all approach to object detection using deep neural networks (DNNs) leads to inefficient utilization of computational resources.
We propose SHIFT which continuously selects from a variety of DNN-based OD models depending on the dynamically changing contextual information and computational constraints.
Our proposed methodology results in improvements of up to 7.5x in energy usage and 2.8x in latency compared to state-of-the-art GPU-based single model OD approaches.
arXiv Detail & Related papers (2024-02-12T05:38:11Z)
- An Intelligent Deterministic Scheduling Method for Ultra-Low Latency Communication in Edge Enabled Industrial Internet of Things [19.277349546331557]
Time-Sensitive Networking (TSN) has recently been researched to realize low-latency communication via deterministic scheduling.
A non-collision-theory-based deterministic scheduling (NDS) method is proposed to achieve ultra-low-latency communication for time-sensitive flows.
Experiment results demonstrate that NDS/DQS can well support deterministic ultra-low-latency services and guarantee efficient bandwidth utilization.
arXiv Detail & Related papers (2022-07-17T16:52:51Z)
- MAPLE-Edge: A Runtime Latency Predictor for Edge Devices [80.01591186546793]
We propose MAPLE-Edge, an edge device-oriented extension of MAPLE, the state-of-the-art latency predictor for general purpose hardware.
Compared to MAPLE, MAPLE-Edge can describe the runtime and target device platform using a much smaller set of CPU performance counters.
We also demonstrate that unlike MAPLE which performs best when trained on a pool of devices sharing a common runtime, MAPLE-Edge can effectively generalize across runtimes.
arXiv Detail & Related papers (2022-04-27T14:00:48Z)
- Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emission compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z)
- WNARS: WFST based Non-autoregressive Streaming End-to-End Speech Recognition [59.975078145303605]
We propose a novel framework, namely WNARS, using hybrid CTC-attention AED models and weighted finite-state transducers.
On the AISHELL-1 task, our WNARS achieves a character error rate of 5.22% with 640ms latency, to the best of our knowledge, which is the state-of-the-art performance for online ASR.
arXiv Detail & Related papers (2021-04-08T07:56:03Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and require heavy computation.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.