ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models
- URL: http://arxiv.org/abs/2105.03176v1
- Date: Fri, 7 May 2021 11:39:05 GMT
- Title: ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models
- Authors: Matthias Wess, Matvey Ivanov, Anvesh Nookala, Christoph Unger,
Alexander Wendt, Axel Jantsch
- Abstract summary: We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models with those of statistical models, the roofline model, and a refined roofline model.
- Score: 56.21470608621633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With new accelerator hardware for DNN, the computing power for AI
applications has increased rapidly. However, as DNN algorithms become more
complex and optimized for specific applications, latency requirements remain
challenging, and it is critical to find the optimal points in the design space.
To decouple the architectural search from the target hardware, we propose a
time estimation framework that allows for modeling the inference latency of
DNNs on hardware accelerators based on mapping and layer-wise estimation
models. The proposed methodology extracts a set of models from micro-kernel and
multi-layer benchmarks and generates a stacked model for mapping and network
execution time estimation. For evaluation, we compare the estimation accuracy
and fidelity of the generated mixed models with those of statistical models,
the roofline model, and a refined roofline model. We test the mixed models on the ZCU102
SoC board with DNNDK and Intel Neural Compute Stick 2 on a set of 12
state-of-the-art neural networks. The mixed models show an average estimation error of 3.47%
for the DNNDK and 7.44% for the NCS2, outperforming the statistical and
analytical layer models for almost all selected networks. For a randomly
selected subset of 34 networks from the NASBench dataset, the mixed model reaches
a fidelity of 0.988 in terms of Spearman's rank correlation coefficient. The code
of ANNETTE is publicly available at
https://github.com/embedded-machine-learning/annette.
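To make the stacked, layer-wise estimation idea concrete, here is a minimal sketch (not the actual ANNETTE implementation, which lives in the repository above): per-layer roofline-style estimates, refined by an assumed benchmarked utilization factor and launch overhead, are summed over a network's layers, and fidelity is measured with Spearman's rank correlation. All hardware numbers and latencies below are invented placeholders.

```python
# Minimal sketch of layer-wise latency estimation in the spirit of ANNETTE.
# The rates, utilization factor, overhead, and all latencies are illustrative
# assumptions; the real framework fits its models from micro-kernel and
# multi-layer benchmarks.
from scipy.stats import spearmanr

PEAK_FLOPS = 1.2e12   # assumed accelerator peak, ops/s
MEM_BW = 19.2e9       # assumed memory bandwidth, bytes/s

def roofline_latency(ops, bytes_moved):
    """Classic roofline bound: compute-limited or memory-limited time."""
    return max(ops / PEAK_FLOPS, bytes_moved / MEM_BW)

def layer_latency(ops, bytes_moved, utilization=0.55, overhead=12e-6):
    """Refined estimate: roofline time scaled by an (assumed) benchmarked
    utilization factor, plus a fixed per-layer launch overhead."""
    return roofline_latency(ops, bytes_moved) / utilization + overhead

def network_latency(layers):
    """Stacked model: sum the per-layer estimates over the mapped graph."""
    return sum(layer_latency(l["ops"], l["bytes"]) for l in layers)

# Toy comparison of estimated vs. measured latency over several networks.
nets = [
    [{"ops": 2.3e9, "bytes": 5.0e7}, {"ops": 1.1e9, "bytes": 3.2e7}],
    [{"ops": 8.0e8, "bytes": 2.1e7}],
    [{"ops": 4.5e9, "bytes": 9.0e7}, {"ops": 6.0e8, "bytes": 1.5e7}],
]
estimated = [network_latency(n) for n in nets]
measured = [4.1e-3, 1.4e-3, 7.9e-3]      # made-up ground-truth latencies

rho, _ = spearmanr(estimated, measured)  # rank fidelity, as in the abstract
print(f"Spearman's rho (fidelity): {rho:.3f}")
```

Rank fidelity matters here because an architecture search only needs the estimator to order candidate networks correctly, not to predict their absolute latencies.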
Related papers
- A model for multi-attack classification to improve intrusion detection performance using deep learning approaches [0.0]
The objective here is to create a reliable intrusion detection mechanism to help identify malicious attacks.
A deep-learning-based solution framework consisting of three approaches is developed.
The first approach is a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) trained with seven optimizer functions: Adamax, SGD, Adagrad, Adam, RMSprop, Nadam, and Adadelta.
The models self-learn the features and classify the attack classes in a multi-attack classification setting.
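A minimal sketch of such a setup, assuming a Keras-style LSTM classifier compiled with each of the seven listed optimizers; the feature count, class count, and data are placeholders, and this is not the paper's code.

```python
# Hedged sketch: an LSTM multi-class classifier trained once per optimizer.
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES, N_CLASSES = 10, 41, 5   # assumed NSL-KDD-like dimensions
X = np.random.rand(256, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)  # toy multi-attack labels

for opt in ["adamax", "sgd", "adagrad", "adam", "rmsprop", "nadam", "adadelta"]:
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(X, y, epochs=1, batch_size=64, verbose=0)
    print(opt, hist.history["accuracy"][-1])
```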
arXiv Detail & Related papers (2023-10-25T05:38:44Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregularly timed observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
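The following toy illustrates one common way to make a recurrence continuous in time, decaying the hidden state over each irregular observation gap and emitting a Gaussian forecast; this decay-GRU stand-in is an assumption, and the paper's CTRNN formulation may differ in detail.

```python
# Illustrative only: hidden state decays over the irregular time gap, then a
# GRU cell ingests the next observation; the head outputs (mean, log-variance).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecayGRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.GRUCell(1, hidden)
        self.decay = nn.Linear(1, hidden)   # time gap -> per-unit decay rate
        self.head = nn.Linear(hidden, 2)    # -> Gaussian forecast parameters

    def forward(self, values, gaps):
        # values, gaps: (batch, steps); gaps are inter-observation times
        h = values.new_zeros(values.size(0), self.hidden)
        for t in range(values.size(1)):
            rate = F.softplus(self.decay(gaps[:, t:t + 1]))
            h = h * torch.exp(-rate)        # forget more after longer gaps
            h = self.cell(values[:, t:t + 1], h)
        mu, logvar = self.head(h).chunk(2, dim=-1)
        return mu, logvar

model = DecayGRUForecaster()
values, gaps = torch.randn(8, 20), torch.rand(8, 20)  # toy readings and gaps
mu, logvar = model(values, gaps)
target = torch.randn(8, 1)                             # toy "next glucose"
loss = F.gaussian_nll_loss(mu, target, logvar.exp())   # probabilistic training
loss.backward()
```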
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular, neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
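A rough sketch of the core idea, sparse learned connectivity between neural modules via top-k attention; the module count, dimensions, and routing details below are assumptions, not the published NAC design.

```python
# Toy layer: each module attends to only its k strongest incoming modules.
import torch
import torch.nn as nn

class SparseModuleLayer(nn.Module):
    def __init__(self, n_modules=8, dim=16, k=2):
        super().__init__()
        self.mods = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modules))
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim)
        self.k = k

    def forward(self, states):                    # states: (batch, modules, dim)
        scores = self.q(states) @ self.kv(states).transpose(1, 2)
        topv, topi = scores.topk(self.k, dim=-1)  # keep k edges per module
        mask = torch.full_like(scores, float("-inf")).scatter(-1, topi, topv)
        attn = mask.softmax(dim=-1)               # sparse attention over modules
        mixed = attn @ states
        out = torch.stack([m(mixed[:, i]) for i, m in enumerate(self.mods)], dim=1)
        return torch.relu(out)

layer = SparseModuleLayer()
print(layer(torch.randn(4, 8, 16)).shape)         # torch.Size([4, 8, 16])
```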
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Model Architecture Adaption for Bayesian Neural Networks [9.978961706999833]
We show a novel network architecture search (NAS) method that optimizes BNNs for both accuracy and uncertainty.
In our experiments, the searched models show uncertainty estimation ability and accuracy comparable to the state-of-the-art (deep ensembles).
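A toy skeleton of a search that scores candidates on both accuracy and an uncertainty proxy (here MC-dropout predictive entropy, with an assumed trade-off weight); the paper's NAS method and Bayesian treatment are more elaborate, and the untrained toy networks here only outline the loop.

```python
# Random search over (width, depth, dropout) scored on accuracy - uncertainty.
import random
import torch
import torch.nn as nn

def build(width, depth, p_drop, d_in=8, n_classes=3):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU(), nn.Dropout(p_drop)]
        d = width
    layers.append(nn.Linear(d, n_classes))
    return nn.Sequential(*layers)

@torch.no_grad()
def score(model, x, y, samples=8):
    model.train()                       # keep dropout active: MC-dropout
    probs = torch.stack([model(x).softmax(-1) for _ in range(samples)]).mean(0)
    acc = (probs.argmax(-1) == y).float().mean()
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
    return (acc - 0.1 * entropy).item() # assumed accuracy/uncertainty weight

x, y = torch.randn(64, 8), torch.randint(0, 3, (64,))
best_s, best_arch = float("-inf"), None
for _ in range(10):
    arch = (random.choice([16, 32, 64]), random.choice([1, 2, 3]),
            random.choice([0.1, 0.3]))
    s = score(build(*arch), x, y)
    if s > best_s:
        best_s, best_arch = s, arch
print(best_arch, best_s)
```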
arXiv Detail & Related papers (2022-02-09T10:58:50Z)
- JMSNAS: Joint Model Split and Neural Architecture Search for Learning over Mobile Edge Networks [23.230079759174902]
A joint model split and neural architecture search (JMSNAS) framework is proposed to automatically generate and deploy a DNN model over a mobile edge network.
Considering both the computing and communication resource constraints, a computational graph search problem is formulated.
Experimental results confirm the superiority of the proposed framework over state-of-the-art split machine learning design methods.
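A back-of-envelope sketch of the decision JMSNAS automates: choosing the cut point in a layer chain that minimizes device compute, link transmission, and server compute time. All rates and layer sizes below are invented.

```python
# Pick the split that minimizes end-to-end latency over a toy layer chain.
layers = [  # (ops, output_bytes) per layer
    (2e8, 1.6e6), (4e8, 8.0e5), (8e8, 4.0e5), (1e9, 2.0e5), (5e8, 4.0e4),
]
DEV_OPS, SRV_OPS, LINK_BPS = 5e9, 1e11, 1e7   # assumed device/server/link rates
RAW_INPUT_BYTES = 1.6e6                       # assumed input size if cut = 0

def end_to_end(cut):
    dev = sum(ops for ops, _ in layers[:cut]) / DEV_OPS
    tx = (layers[cut - 1][1] if cut else RAW_INPUT_BYTES) / LINK_BPS
    srv = sum(ops for ops, _ in layers[cut:]) / SRV_OPS
    return dev + tx + srv

best = min(range(len(layers) + 1), key=end_to_end)
print(f"best split after layer {best}: {end_to_end(best) * 1e3:.1f} ms")
```

In the actual framework this decision is formulated as a computational graph search subject to both computing and communication resource constraints, rather than the exhaustive scan above.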
arXiv Detail & Related papers (2021-11-16T03:10:23Z)
- Self-Learning for Received Signal Strength Map Reconstruction with Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model on some ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models with no architecture search.
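A minimal sketch of the self-learning step, training a second model on pseudo-labels produced by the first; the fixed MLP stands in for the NAS-found architecture, and the toy path-loss data is invented.

```python
# Self-learning for map reconstruction: fit on sparse measurements, then
# refit on the first model's predictions over a dense unlabeled grid.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net():
    return nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(net, x, y, steps=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(net(x), y)
        loss.backward()
        opt.step()
    return net

xy = torch.rand(100, 2)                       # sparse measurement locations
rss = -60 - 20 * (xy - 0.5).norm(dim=1, keepdim=True).log10()  # toy dBm field
first = fit(make_net(), xy, rss)

grid = torch.rand(2000, 2)                    # dense unlabeled locations
pseudo = first(grid).detach()                 # self-labels from the first model
second = fit(make_net(), torch.cat([xy, grid]), torch.cat([rss, pseudo]))
```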
arXiv Detail & Related papers (2021-05-17T12:19:22Z)
- Balancing Accuracy and Latency in Multipath Neural Networks [0.09668407688201358]
We use a one-shot neural architecture search model to implicitly evaluate the performance of an intractable number of neural networks.
We show that our method can accurately model the relative performance between models with different latencies and predict the performance of unseen models with good precision across different datasets.
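A compact illustration of the one-shot idea: one weight-shared supernet whose sampled sub-paths stand in for separately trained candidates. The choice operations here are arbitrary placeholders, not the paper's search space.

```python
# Weight-sharing supernet: evaluate many architectures without retraining.
import random
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)),
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)

blocks = nn.ModuleList(ChoiceBlock(16) for _ in range(4))
x = torch.randn(32, 16)

def run_subnet(arch):              # arch: one op choice per block
    h = x
    for blk, c in zip(blocks, arch):
        h = blk(h, c)
    return h

arch = [random.randrange(3) for _ in range(4)]  # a sampled candidate
out = run_subnet(arch)                          # scored with shared weights
```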
arXiv Detail & Related papers (2021-04-25T00:05:48Z)
- NL-CNN: A Resources-Constrained Deep Learning Model based on Nonlinear Convolution [0.0]
A novel convolutional neural network model, abbreviated NL-CNN, is proposed, in which nonlinear convolution is emulated by a cascade of convolution + nonlinearity layers.
A performance evaluation on several widely known datasets is provided, showing several relevant features of the model.
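A short sketch of the stated construction, emulating a nonlinear convolution with a cascade of convolution + nonlinearity stages; the kernel size and stage count are assumptions.

```python
# Cascade of (conv + ReLU) stages standing in for one nonlinear convolution.
import torch
import torch.nn as nn

class NLConv(nn.Module):
    def __init__(self, cin, cout, k=3, stages=2):
        super().__init__()
        seq, c = [], cin
        for _ in range(stages):
            seq += [nn.Conv2d(c, cout, k, padding=k // 2), nn.ReLU()]
            c = cout
        self.body = nn.Sequential(*seq)

    def forward(self, x):
        return self.body(x)

x = torch.randn(1, 3, 32, 32)
print(NLConv(3, 16)(x).shape)      # torch.Size([1, 16, 32, 32])
```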
arXiv Detail & Related papers (2021-01-30T13:38:42Z)
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
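A DARTS-flavoured toy of the searchable ingredient, a softmax mixture over candidate temporal context widths with learnable architecture weights; the actual work applies this to factored TDNNs with LF-MMI training on speech data, which this sketch does not reproduce.

```python
# DARTS-style mixed op: softmax-weighted sum of candidate context widths.
import torch
import torch.nn as nn

class MixedContext(nn.Module):
    def __init__(self, ch, widths=(1, 3, 5)):
        super().__init__()
        self.cands = nn.ModuleList(
            nn.Conv1d(ch, ch, w, padding=w // 2) for w in widths)
        self.alpha = nn.Parameter(torch.zeros(len(widths)))  # arch weights

    def forward(self, x):            # x: (batch, channels, frames)
        w = self.alpha.softmax(0)
        return sum(wi * op(x) for wi, op in zip(w, self.cands))

layer = MixedContext(8)
out = layer(torch.randn(2, 8, 100))
# After the search, the candidate with the largest alpha is kept, the rest pruned.
```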
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
- Contextual-Bandit Anomaly Detection for IoT Data in Distributed Hierarchical Edge Computing [65.78881372074983]
IoT devices can hardly afford complex deep neural network (DNN) models, and offloading anomaly detection tasks to the cloud incurs long delays.
We propose and build a demo for an adaptive anomaly detection approach for distributed hierarchical edge computing (HEC) systems.
We show that our proposed approach significantly reduces detection delay without sacrificing accuracy, as compared to offloading detection tasks to the cloud.
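A toy LinUCB-style contextual bandit choosing which tier's model (device, edge, or cloud) handles each input, trading detection quality against per-tier delay; the contexts, rewards, and penalties below are simulated stand-ins, not the paper's scheme.

```python
# LinUCB over three model tiers; reward = quality proxy - delay penalty.
import numpy as np

rng = np.random.default_rng(0)
N_ARMS, DIM = 3, 4                          # 3 tiers, 4 context features
A = [np.eye(DIM) for _ in range(N_ARMS)]    # per-arm design matrices
b = [np.zeros(DIM) for _ in range(N_ARMS)]

def choose(ctx, alpha=0.5):
    scores = []
    for a in range(N_ARMS):
        theta = np.linalg.solve(A[a], b[a])
        ucb = theta @ ctx + alpha * np.sqrt(ctx @ np.linalg.solve(A[a], ctx))
        scores.append(ucb)
    return int(np.argmax(scores))

for _ in range(500):
    ctx = rng.random(DIM)
    arm = choose(ctx)
    # simulated reward: feature-dependent quality minus an assumed delay cost
    reward = ctx[arm] - [0.0, 0.1, 0.3][arm] + 0.05 * rng.standard_normal()
    A[arm] += np.outer(ctx, ctx)
    b[arm] += reward * ctx
```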
arXiv Detail & Related papers (2020-04-15T06:13:33Z)