Benchmarking Test-Time Unsupervised Deep Neural Network Adaptation on
Edge Devices
- URL: http://arxiv.org/abs/2203.11295v1
- Date: Mon, 21 Mar 2022 19:10:40 GMT
- Title: Benchmarking Test-Time Unsupervised Deep Neural Network Adaptation on
Edge Devices
- Authors: Kshitij Bhardwaj, James Diffenderfer, Bhavya Kailkhura, Maya Gokhale
- Abstract summary: The prediction accuracy of deep neural networks (DNNs) deployed at the edge can degrade over time due to shifts in the distribution of incoming data.
Recently introduced prediction-time unsupervised DNN adaptation techniques improve the prediction accuracy of models on noisy data by re-tuning the batch normalization parameters.
This paper, for the first time, performs a comprehensive measurement study of such techniques to quantify their performance and energy consumption on various edge devices.
- Score: 19.335535517714703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prediction accuracy of deep neural networks (DNNs) deployed
at the edge can degrade over time due to shifts in the distribution of
incoming data. To remain robust, DNNs must be able to update themselves to
maintain their prediction accuracy. This adaptation at the resource-constrained
edge is challenging because: (i) new labeled data may not be available; (ii)
adaptation must happen on device, since connections to the cloud may be
unavailable; and (iii) the process must be not only fast but also memory- and
energy-efficient. Recently, lightweight prediction-time unsupervised DNN
adaptation techniques have been introduced that improve the prediction accuracy
of models on noisy data by re-tuning the batch normalization (BN) parameters.
This paper, for the first time, performs a comprehensive measurement study of
such techniques to quantify their performance and energy consumption on various
edge devices, as well as to identify bottlenecks and propose optimization
opportunities. In particular, this study considers the CIFAR-10-C image
classification dataset with corruptions, three robust DNNs (ResNeXt,
Wide-ResNet, ResNet-18), two BN
adaptation algorithms (one that updates normalization statistics and the other
that also optimizes transformation parameters), and three edge devices (FPGA,
Raspberry-Pi, and Nvidia Xavier NX). We find the approach that only updates
the normalization statistics, paired with Wide-ResNet and running on the Xavier
GPU, to be the most effective overall at balancing multiple cost metrics. However,
the adaptation overhead can still be significant (around 213 ms). The results
strongly motivate the need for algorithm-hardware co-design for efficient
on-device DNN adaptation.
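To make the two benchmarked strategies concrete, here is a minimal PyTorch sketch, not the paper's exact implementation: Variant 1 re-estimates the BN normalization statistics from each unlabeled test batch, while Variant 2 additionally tunes the BN affine (transformation) parameters by minimizing prediction entropy, in the style of entropy-minimization methods such as TENT. The learning rate, layer filtering, and model choice below are assumptions.

```python
import torch
import torch.nn as nn

def enable_bn_stats_adaptation(model: nn.Module) -> nn.Module:
    """Variant 1 (sketch): recompute BN statistics from each test batch."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                      # use batch statistics at prediction time
            m.track_running_stats = False  # do not overwrite the stored statistics
    return model

def bn_affine_parameters(model: nn.Module):
    """Collect only the BN scale/shift (gamma/beta) parameters for Variant 2."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.affine:
            yield m.weight
            yield m.bias

def entropy_adaptation_step(model, batch, optimizer):
    """Variant 2 (sketch): also tune gamma/beta by minimizing prediction entropy."""
    logits = model(batch)  # unlabeled (possibly corrupted) test batch
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch (ResNet-18 stands in for the paper's models; lr is an assumption):
# model = enable_bn_stats_adaptation(torchvision.models.resnet18())
# optimizer = torch.optim.SGD(list(bn_affine_parameters(model)), lr=1e-3)
# logits = entropy_adaptation_step(model, corrupted_batch, optimizer)
```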
Related papers
- Inference-to-complete: A High-performance and Programmable Data-plane Co-processor for Neural-network-driven Traffic Analysis [18.75879653408466]
The NN-driven intelligent data plane (NN-driven IDP) is an emerging topic owing to its excellent accuracy and high performance.
Kaleidoscope is a flexible, high-performance co-processor that sits on a bypass of the data plane.
Kaleidoscope reaches 256-352 ns inference latency and 100 Gbps throughput with negligible impact on the data plane.
arXiv Detail & Related papers (2024-11-01T07:10:08Z) - Towards Hyperparameter-Agnostic DNN Training via Dynamical System
Insights [4.513581513983453]
We present ECCO-DNN, a first-order optimization method specialized for deep neural networks (DNNs).
This method models the optimization-variable trajectory as a dynamical system and develops a discretization algorithm that adaptively selects step sizes based on the trajectory's shape, as the sketch below illustrates.
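ECCO-DNN's actual update rule is not reproduced in this summary; as a hedged illustration of deriving a step size from the iterate trajectory, the classical Barzilai-Borwein rule computes one from successive parameter and gradient displacements:

```python
import torch

def barzilai_borwein_step(theta_prev, theta, grad_prev, grad, eps=1e-12):
    # Illustration only: a Barzilai-Borwein step size, one classical way to
    # pick a step from the trajectory's shape. This is NOT the ECCO-DNN
    # discretization, just a sketch of trajectory-adaptive step selection.
    s = theta - theta_prev  # parameter displacement along the trajectory
    y = grad - grad_prev    # gradient displacement
    return (s * y).sum().abs() / ((y * y).sum() + eps)
```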
arXiv Detail & Related papers (2023-10-21T03:45:13Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
With a latency- and accuracy-aware reward design (sketched below), such a computation adapts well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
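The paper's exact reward shaping is not given in this summary; a minimal sketch of a latency- and accuracy-aware reward might look like the following, where the budget, penalty form, and weight `lam` are all assumptions:

```python
def co_inference_reward(accuracy: float, latency_ms: float,
                        latency_budget_ms: float = 50.0, lam: float = 1.0) -> float:
    # Reward the DRL agent for accuracy; penalize latency beyond an assumed budget.
    overshoot = max(0.0, latency_ms - latency_budget_ms)
    return accuracy - lam * overshoot / latency_budget_ms
```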
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to learn solutions of the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes; a toy sketch of sensitivity-informed training follows.
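A hedged sketch of the sensitivity-informed idea, assuming the training set provides optimal dispatches `y_opt` and input sensitivities `dy_dx`; the names, the simple gradient proxy for the full Jacobian, and the weight `mu` are illustrative, not the paper's formulation:

```python
import torch

def sensitivity_informed_loss(model, x, y_opt, dy_dx, mu=0.1):
    # Fit the OPF solution and, in addition, match a simple input-sensitivity
    # proxy: the gradient of the summed outputs with respect to the inputs.
    x = x.clone().requires_grad_(True)
    y_hat = model(x)
    fit = ((y_hat - y_opt) ** 2).mean()
    grad = torch.autograd.grad(y_hat.sum(), x, create_graph=True)[0]
    sens = ((grad - dy_dx) ** 2).mean()
    return fit + mu * sens
```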
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - Now that I can see, I can improve: Enabling data-driven finetuning of
CNNs on the edge [11.789983276366987]
This paper provides a first step towards enabling CNN finetuning on an edge device based on structured pruning.
It explores the performance gains and costs of doing so and presents an open-source framework for deploying such approaches (a structured-pruning sketch follows).
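As a hedged sketch of the structured-pruning idea (the L1 ranking criterion and `keep_ratio` are assumptions; the paper's framework is not reproduced), one can zero out entire low-magnitude output channels before finetuning:

```python
import torch
import torch.nn as nn

def l1_channel_prune_(conv: nn.Conv2d, keep_ratio: float = 0.5) -> torch.Tensor:
    # Rank output channels by the L1 norm of their filters and zero the rest;
    # masking, rather than physically shrinking the layer, keeps the sketch short.
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, k).indices
    mask = torch.zeros_like(scores)
    mask[keep] = 1.0
    with torch.no_grad():
        conv.weight.mul_(mask.view(-1, 1, 1, 1))
    return mask
```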
arXiv Detail & Related papers (2020-06-15T17:16:45Z) - APQ: Joint Search for Network Architecture, Pruning and Quantization
Policy [49.3037538647714]
We present APQ for efficient deep learning inference on resource-constrained hardware.
Unlike previous methods that separately search the neural architecture, pruning policy, and quantization policy, we optimize them in a joint manner.
With the same accuracy, APQ reduces the latency/energy by 2x/1.3x over MobileNetV2+HAQ (a joint-search sketch follows).
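A toy sketch of the joint-search idea: sample architecture, pruning, and quantization choices together and rank candidates with a single accuracy predictor rather than searching each dimension separately. The search space and the `predictor` callable below are invented for illustration:

```python
import random

def sample_candidate():
    # Illustrative joint design point: architecture + pruning + quantization.
    return {
        "depth": random.choice([12, 16, 20]),
        "width_mult": random.choice([0.5, 0.75, 1.0]),
        "prune_ratio": random.choice([0.0, 0.3, 0.5]),
        "weight_bits": random.choice([4, 6, 8]),
    }

def joint_search(predictor, n_samples=1000):
    # `predictor` is an assumed callable that scores a candidate's accuracy;
    # all three design dimensions are optimized in one pass, not in stages.
    return max((sample_candidate() for _ in range(n_samples)), key=predictor)
```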
arXiv Detail & Related papers (2020-06-15T16:09:17Z) - FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors (a predictor-guided search sketch follows).
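A minimal sketch of predictor-guided evolutionary search over joint (architecture, recipe) pairs; the encoding fields, mutation rule, and scoring callable are assumptions, not FBNetV3's actual search space:

```python
import copy
import random

def mutate(pair: dict) -> dict:
    # Perturb one field of a joint (architecture, training-recipe) encoding.
    child = copy.deepcopy(pair)
    key = random.choice(list(child))
    if key == "lr":
        child["lr"] *= random.choice([0.5, 2.0])                  # recipe mutation
    else:
        child[key] = max(1, child[key] + random.choice([-1, 1]))  # arch mutation
    return child

def evolve(predictor, population, steps=100):
    # `predictor` jointly scores architecture and recipe, guiding both
    # sample selection and ranking.
    for _ in range(steps):
        parent = max(population, key=predictor)
        population.append(mutate(parent))
    return max(population, key=predictor)

# e.g. population = [{"depth": 16, "width": 64, "lr": 0.1}]
```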
arXiv Detail & Related papers (2020-06-03T05:20:21Z) - FADNet: A Fast and Accurate Network for Disparity Estimation [18.05392578461659]
We propose an efficient and accurate deep network for disparity estimation named FADNet.
It exploits efficient 2D-based correlation layers with stacked blocks to preserve fast computation (see the correlation sketch below).
It produces multi-scale predictions, enabling a multi-scale weight-scheduling training technique that improves accuracy.
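A hedged sketch of a horizontal correlation layer of the kind used in disparity networks; the search range `max_disp` and the mean reduction are assumptions, and FADNet's exact layer may differ:

```python
import torch
import torch.nn.functional as F

def horizontal_correlation(left, right, max_disp=4):
    # Compare left features against horizontally shifted right features to
    # build a (B, max_disp + 1, H, W) cost volume for disparity estimation.
    w = left.shape[-1]
    volume = []
    for d in range(max_disp + 1):
        shifted = F.pad(right, (d, 0))[:, :, :, :w]  # shift right features by d
        volume.append((left * shifted).mean(dim=1, keepdim=True))
    return torch.cat(volume, dim=1)
```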
arXiv Detail & Related papers (2020-03-24T10:27:11Z) - A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration
Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to satisfy the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset (a data-free pruning sketch follows).
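One way to prune without touching private data is to rely only on the weights themselves. This magnitude-pruning sketch illustrates that property; the sparsity level is an assumption, and the paper's actual framework is more involved:

```python
import torch

def magnitude_prune_(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    # Zero the smallest-magnitude weights; this needs no training data at all,
    # private or otherwise.
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    with torch.no_grad():
        weight.mul_((weight.abs() > threshold).float())
    return weight
```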
arXiv Detail & Related papers (2020-03-13T23:52:03Z) - PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with
Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the key insight is to use the compiler to regain and guarantee high hardware efficiency (a pattern-masking sketch follows).
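A toy sketch of pattern-based pruning: every 3x3 kernel keeps only the entries of its best-matching mask from a small fixed pattern set. The pattern set here is invented for illustration; PatDNN derives and exploits its patterns differently:

```python
import torch

def apply_kernel_patterns_(conv_weight: torch.Tensor, patterns: torch.Tensor):
    # conv_weight: (O, I, 3, 3); patterns: (P, 3, 3) binary masks.
    o, i, kh, kw = conv_weight.shape
    kernels = conv_weight.view(-1, kh, kw)
    # Score each kernel against each pattern by the weight magnitude it retains.
    scores = torch.einsum('nij,pij->np', kernels.abs(), patterns.float())
    best = scores.argmax(dim=1)              # best-matching pattern per kernel
    with torch.no_grad():
        kernels.mul_(patterns[best].float())  # in-place: prunes conv_weight too
    return conv_weight

# e.g. patterns = torch.tensor([[[0, 1, 0], [1, 1, 1], [0, 1, 0]],
#                               [[1, 1, 0], [1, 1, 0], [0, 0, 1]]])
```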
arXiv Detail & Related papers (2020-01-01T04:52:07Z)