Unsupervised Early Exit in DNNs with Multiple Exits
- URL: http://arxiv.org/abs/2209.09480v1
- Date: Tue, 20 Sep 2022 05:35:54 GMT
- Title: Unsupervised Early Exit in DNNs with Multiple Exits
- Authors: Hari Narayan N U and Manjesh K. Hanawal and Avinash Bhardwaj
- Abstract summary: We focus on Elastic BERT, a pre-trained multi-exit DNN, to demonstrate that it `nearly' satisfies the Strong Dominance (SD) property.
We empirically validate our algorithm on IMDb and Yelp datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) are generally designed as sequentially cascaded
differentiable blocks/layers with a prediction module connected only to its
last layer. Prediction modules can, however, be attached at multiple points
along the backbone, where inference can stop at an intermediate stage without
passing through all the modules. The last exit point may offer a lower
prediction error but also incurs more computation and latency. An
exit point that is `optimal' in terms of both prediction error and cost is
desirable. The optimal exit point may depend on the latent distribution of the
tasks and may change from one task type to another. During neural inference,
the ground truth of instances may not be available and error rates at each exit
point cannot be estimated. Hence one is faced with the problem of selecting the
optimal exit in an unsupervised setting. Prior works tackled this problem in an
offline supervised setting assuming that enough labeled data is available to
estimate the error rate at each exit point and tune the parameters for better
accuracy. However, pre-trained DNNs are often deployed in new domains for which
a large amount of ground truth may not be available. We model the problem of
exit selection as an unsupervised online learning problem and use bandit theory
to identify the optimal exit point. Specifically, we focus on Elastic BERT, a
pre-trained multi-exit DNN to demonstrate that it `nearly' satisfies the Strong
Dominance (SD) property making it possible to learn the optimal exit in an
online setup without knowing the ground truth labels. We develop an upper
confidence bound (UCB) based algorithm, named UEE-UCB, that provably achieves
sub-linear regret under the SD property. Thus our method provides a means to
adaptively learn domain-specific optimal exit points in multi-exit DNNs. We
empirically validate our algorithm on IMDb and Yelp datasets.
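As a toy illustration of the bandit formulation (not the paper's exact UEE-UCB procedure), the sketch below treats each exit as an arm whose unsupervised loss combines its disagreement with the deepest exit, the proxy that the Strong Dominance property makes meaningful, with its known compute cost. The simulated agreement rates, the trade-off weight `lam`, and the exact loss form are assumptions made for this example.

```python
import numpy as np

# Hedged sketch: UCB-style unsupervised exit selection. Each exit is a
# bandit arm; its loss mixes disagreement with the deepest exit (an
# unsupervised error proxy under strong dominance) and a known cost.
rng = np.random.default_rng(0)
K = 6                                    # number of exits
cost = np.linspace(0.2, 1.0, K)          # normalized compute cost per exit
agree = np.array([0.70, 0.80, 0.88, 0.93, 0.97, 1.00])  # toy P(exit k agrees with last exit)
lam = 0.5                                # error-vs-cost trade-off (assumed)

counts = np.zeros(K)
mean_loss = np.zeros(K)
for t in range(1, 5001):
    if t <= K:
        k = t - 1                        # initialization: probe every exit once
    else:
        lcb = mean_loss - np.sqrt(2 * np.log(t) / counts)
        k = int(np.argmin(lcb))          # optimism: lower confidence bound on loss
    disagree = rng.random() > agree[k]   # unsupervised signal from the deepest exit
    loss = float(disagree) + lam * cost[k]
    counts[k] += 1
    mean_loss[k] += (loss - mean_loss[k]) / counts[k]

print("chosen exit:", int(np.argmin(mean_loss)), "probe counts:", counts.astype(int))
```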
Related papers
- Deep Limit Model-free Prediction in Regression [0.0]
We provide a model-free approach based on Deep Neural Networks (DNNs) to accomplish point prediction and prediction intervals under a general regression setting.
Our method is more stable and accurate than other DNN-based counterparts, especially for optimal point predictions.
arXiv Detail & Related papers (2024-08-18T16:37:53Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of the predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios (a code sketch follows this entry).
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
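The summary above names the technique only at a high level; as one hedged illustration (not the paper's actual loss), an auxiliary term that penalizes the gap between a batch's mean confidence and its mean accuracy could look like the following. The function name and the weight 0.1 are invented for the example.

```python
import torch
import torch.nn.functional as F

def confidence_accuracy_alignment_loss(logits, targets):
    """Hypothetical auxiliary loss: penalize the gap between mean predicted
    confidence and mean accuracy within a mini-batch. This mirrors the *idea*
    of aligning confidence with accuracy; the paper's formulation may differ."""
    probs = F.softmax(logits, dim=-1)
    confidences, predictions = probs.max(dim=-1)
    accuracy = (predictions == targets).float().mean()
    # Gap between average confidence and average accuracy
    return (confidences.mean() - accuracy).abs()

# Usage: add the auxiliary term to the usual task loss during training.
logits = torch.randn(8, 5, requires_grad=True)   # toy batch of 8, 5 classes
targets = torch.randint(0, 5, (8,))
loss = F.cross_entropy(logits, targets) + 0.1 * confidence_accuracy_alignment_loss(logits, targets)
loss.backward()
```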
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical early-exiting dynamic neural network (EDNN) has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently at the two phases causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting and propose multiple training techniques to optimize the model effectively (a code sketch of the additive view follows this entry).
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
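To make the additive, gradient-boosting view concrete, here is a minimal sketch in which exit k outputs the running sum of all head logits up to k, so each head learns a residual correction. The layer sizes and class names are arbitrary; the paper's real architecture and training techniques are more involved.

```python
import torch
import torch.nn as nn

class BoostedMultiExitNet(nn.Module):
    """Toy additive multi-exit model: exit k predicts the residual left by
    exits 1..k-1, so its output is the running sum of head logits."""
    def __init__(self, in_dim=32, hidden=64, num_classes=10, num_blocks=3):
        super().__init__()
        dims = [in_dim] + [hidden] * num_blocks
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
            for i in range(num_blocks)
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, num_classes) for _ in range(num_blocks))

    def forward(self, x):
        outputs, running = [], 0.0
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            running = running + head(x)   # additive (boosted) ensemble of heads
            outputs.append(running)
        return outputs                    # one cumulative prediction per exit

model = BoostedMultiExitNet()
exit_logits = model(torch.randn(4, 32))  # list of 3 tensors, each of shape (4, 10)
```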
- Reducing Flipping Errors in Deep Neural Networks [39.24451665215755]
Deep neural networks (DNNs) have been widely applied in various domains of artificial intelligence.
In this paper, we study how many of the test (unseen) samples that a DNN misclassifies in the last epoch were correctly classified in an earlier epoch.
We propose to restrict the behavior changes of a DNN on the correctly-classified samples so that the correct local boundaries can be maintained (a code sketch follows this entry).
arXiv Detail & Related papers (2022-03-16T04:38:06Z)
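As a hedged sketch of "restricting behavior changes on correctly-classified samples" (the paper's exact objective may differ), one natural regularizer keeps the current predictive distribution close to an earlier snapshot's on the samples that snapshot got right. The function name and the KL form are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def flip_consistency_loss(current_logits, snapshot_logits, targets):
    """Hypothetical regularizer in the spirit of reducing flipping errors:
    on samples a previous snapshot classified correctly, penalize divergence
    of the current predictive distribution from the snapshot's."""
    was_correct = snapshot_logits.argmax(dim=-1) == targets  # previously-correct samples
    if not was_correct.any():
        return current_logits.new_zeros(())
    cur = F.log_softmax(current_logits[was_correct], dim=-1)
    old = F.softmax(snapshot_logits[was_correct], dim=-1)
    return F.kl_div(cur, old, reduction="batchmean")         # stay stable where it was right

# Usage: total loss = task loss + lambda * flip_consistency_loss(...)
```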
- Generalizing Neural Networks by Reflecting Deviating Data in Production [15.498447555957773]
We present a runtime approach that mitigates DNN mis-predictions caused by unexpected runtime inputs to the DNN.
We use a distribution analyzer based on the distance metric learned by a Siamese network to identify "unseen" semantically-preserving inputs.
Our approach transforms those unexpected inputs into inputs from the training set that are identified as having similar semantics (a code sketch follows this entry).
arXiv Detail & Related papers (2021-10-06T13:05:45Z)
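The runtime mechanism above can be illustrated with a simple distance check. The encoder `embed`, the threshold, and the nearest-neighbour substitution rule below are assumptions standing in for the paper's Siamese-trained metric and transformation step.

```python
import torch
import torch.nn.functional as F

def analyze_and_reflect(embed, x, train_inputs, train_embeds, threshold):
    """Hypothetical runtime check: embed the incoming input, find its nearest
    training embedding, and if it is too far away ("unseen"), substitute the
    semantically closest training input instead."""
    z = embed(x)                                          # (d,)
    dists = torch.cdist(z.unsqueeze(0), train_embeds)[0]  # distance to each training point
    nearest = int(dists.argmin())
    if dists[nearest] > threshold:                        # deviating / unseen input
        return train_inputs[nearest]                      # reflect to a similar training input
    return x                                              # in-distribution: keep as-is

# Toy usage with a random "encoder"
encoder = torch.nn.Linear(16, 8)
embed = lambda x: F.normalize(encoder(x), dim=-1)
train_inputs = torch.randn(100, 16)
train_embeds = embed(train_inputs)
x_safe = analyze_and_reflect(embed, torch.randn(16), train_inputs, train_embeds, threshold=0.5)
```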
- Early-exit deep neural networks for distorted images: providing an efficient edge offloading [69.43216268165402]
Edge offloading for deep neural networks (DNNs) can be adaptive to the input's complexity.
We introduce expert side branches, each trained on a particular distortion type, to improve robustness against image distortion.
This approach increases the estimated accuracy on the edge, improving the offloading decisions.
arXiv Detail & Related papers (2021-08-20T19:52:55Z)
- Understanding and Improving Early Stopping for Learning with Noisy Labels [63.0730063791198]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods.
Current methods generally decide the early stopping point by considering a DNN as a whole.
We propose to separate a DNN into different parts and progressively train them to address this problem (a code sketch follows this entry).
arXiv Detail & Related papers (2021-06-30T07:18:00Z)
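As a rough sketch of "separating a DNN into parts and progressively training them" (the paper's actual schedule, which is designed around label-noise memorization, differs in detail), the loop below trains one part at a time with its own epoch budget while freezing the rest. The split points and budgets are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical progressive scheme: train the parts one at a time, each with
# its own (short) epoch budget, freezing the parts not currently in training.
parts = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
    nn.Linear(64, 10),
])
model = nn.Sequential(*parts)
data = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(10)]
loss_fn = nn.CrossEntropyLoss()

epoch_budget = [5, 3, 1]  # later parts get fewer epochs (less memorization of noise)
for k, part in enumerate(parts):
    for p in model.parameters():
        p.requires_grad_(False)
    for p in part.parameters():        # only the k-th part is trainable
        p.requires_grad_(True)
    opt = torch.optim.SGD(part.parameters(), lr=0.1)
    for _ in range(epoch_budget[k]):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```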
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online (a code sketch of the neural-linear skeleton follows this entry).
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
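For intuition about the neural-linear idea, a bandit that is linear in learned features, here is a LinUCB-style skeleton on a fixed feature map. The paper's likelihood-matching and limited-memory machinery are omitted, and `phi`, the bonus scale, and the toy environment are invented for the example.

```python
import numpy as np

# Minimal neural-linear skeleton: a linear bandit on features that, in the
# real method, would come from a neural network's last hidden layer.
rng = np.random.default_rng(0)
d, n_arms = 8, 4
phi = lambda x, a: np.concatenate([x * (i == a) for i in range(n_arms)])  # toy per-arm features

A = np.eye(d * n_arms)            # regularized design matrix
b = np.zeros(d * n_arms)
for t in range(1000):
    x = rng.normal(size=d)
    theta = np.linalg.solve(A, b)
    scores = []
    for a in range(n_arms):       # LinUCB-style optimistic score per arm
        f = phi(x, a)
        bonus = np.sqrt(f @ np.linalg.solve(A, f))
        scores.append(f @ theta + 0.5 * bonus)
    a = int(np.argmax(scores))
    reward = x[0] * (a == 1) + rng.normal(scale=0.1)  # toy environment
    f = phi(x, a)
    A += np.outer(f, f)
    b += reward * f
```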
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems, including compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments (a code sketch of the self-directed loop follows this entry).
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
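The self-directed loop can be sketched as surrogate-assisted optimization: fit a cheap model to the FEM evaluations seen so far, search it for a promising design, and spend the next expensive FEM call there. `fem_evaluate`, the random-feature surrogate (standing in for the DNN), and the search-by-sampling step are all stand-ins for illustration.

```python
import numpy as np

def fem_evaluate(design):
    """Stand-in for an expensive FEM calculation (hypothetical; a real
    objective would come from a finite element solver)."""
    return float(np.sum((design - 0.3) ** 2))

rng = np.random.default_rng(0)
d = 5
W = rng.standard_normal((d, 50))     # fixed random features: a cheap stand-in for a DNN
features = lambda X: np.tanh(X @ W)

X = rng.uniform(0, 1, size=(10, d))  # initial FEM evaluations
y = np.array([fem_evaluate(x) for x in X])

for step in range(20):
    # 1. Fit the surrogate to all FEM data gathered so far
    Phi = features(X)
    w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(50), Phi.T @ y)
    # 2. Search the cheap surrogate for a promising design
    cand = rng.uniform(0, 1, size=(500, d))
    best = cand[np.argmin(features(cand) @ w)]
    # 3. Spend the next expensive FEM call on that design only
    X = np.vstack([X, best])
    y = np.append(y, fem_evaluate(best))

print("best design found:", X[np.argmin(y)])
```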