Performance Prediction for Convolutional Neural Networks in Edge Devices
- URL: http://arxiv.org/abs/2010.11297v1
- Date: Wed, 21 Oct 2020 20:21:25 GMT
- Title: Performance Prediction for Convolutional Neural Networks in Edge Devices
- Authors: Halima Bouzidi, Hamza Ouarnoughi, Smail Niar and Abdessamad Ait El
Cadi
- Abstract summary: Running Convolutional Neural Network (CNN) based applications on edge devices, near the data source, can address latency and privacy challenges.
We present and compare five widely used Machine Learning based methods for predicting the execution time of CNNs on two edge GPU platforms.
- Score: 0.8602553195689513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Running Convolutional Neural Network (CNN) based applications on edge devices,
near the source of the data, can address latency and privacy challenges. However,
due to their limited computing resources and energy constraints, these
edge devices can hardly satisfy CNN requirements for processing and data storage. For
these platforms, choosing the CNN with the best trade-off between accuracy and
execution time, while respecting hardware constraints, is crucial. In this paper,
we present and compare five widely used Machine Learning based methods for
predicting the execution time of CNNs on two edge GPU platforms. For these five
methods, we also examine the time needed to train them and to tune their
hyperparameters. Finally, we compare the time needed to run the prediction
models on different platforms. These methods greatly facilitate design space
exploration by quickly identifying the best CNN for a target edge GPU.
Experimental results show that eXtreme Gradient Boosting (XGBoost) achieves an
average prediction error below 14.73%, even for unexplored and unseen CNN
architectures. Random Forest (RF) shows comparable accuracy but requires more
time and effort to train. The other three approaches (OLS, MLP and SVR) are
less accurate for CNN performance estimation.
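To make the comparison concrete, the following is a minimal sketch of the kind of experiment described above: fitting the five regressors (OLS, SVR, MLP, RF, XGBoost) on architecture-level features of CNNs to predict measured execution time. The feature names, the dataset file, and the hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch (not the authors' code): compare five regressors for
# predicting CNN execution time from architecture-level features.
# Assumes a hypothetical CSV with columns such as "flops", "params",
# "num_conv", "num_fc", "input_size" and a measured "latency_ms" target.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression       # OLS
from sklearn.svm import SVR                              # Support Vector Regression
from sklearn.neural_network import MLPRegressor          # Multi-Layer Perceptron
from sklearn.ensemble import RandomForestRegressor       # Random Forest
from sklearn.metrics import mean_absolute_percentage_error
from xgboost import XGBRegressor                         # eXtreme Gradient Boosting

df = pd.read_csv("cnn_latency_measurements.csv")          # hypothetical dataset
X = df[["flops", "params", "num_conv", "num_fc", "input_size"]]
y = df["latency_ms"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "OLS": LinearRegression(),
    "SVR": SVR(kernel="rbf", C=10.0),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)                    # training time could be measured here
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: {100 * mape:.2f}% average prediction error")
```

A predictor of this kind can then be queried during design space exploration to rank candidate CNN architectures by estimated latency on the target edge GPU before any deployment.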
Related papers
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z) - Transferability of Convolutional Neural Networks in Stationary Learning
Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
arXiv Detail & Related papers (2023-07-21T13:51:45Z) - An Efficient Evolutionary Deep Learning Framework Based on Multi-source
Transfer Learning to Evolve Deep Convolutional Neural Networks [8.40112153818812]
Convolutional neural networks (CNNs) have consistently achieved better performance over the years by introducing more complex topologies and enlarging their capacity towards deeper and wider CNNs.
The computational cost remains the bottleneck of automatically designing CNNs.
In this paper, inspired by transfer learning, a new evolutionary computation based framework is proposed to efficiently evolve CNNs.
arXiv Detail & Related papers (2022-12-07T20:22:58Z) - AutoDiCE: Fully Automated Distributed CNN Inference at the Edge [0.9883261192383613]
We propose a novel framework, called AutoDiCE, for automated splitting of a CNN model into a set of sub-models.
Our experimental results show that AutoDiCE can deliver distributed CNN inference with reduced energy consumption and memory usage per edge device.
arXiv Detail & Related papers (2022-07-20T15:08:52Z) - Benchmarking Test-Time Unsupervised Deep Neural Network Adaptation on
Edge Devices [19.335535517714703]
The prediction accuracy of deep neural networks (DNNs) deployed at the edge can degrade over time due to shifts in the distribution of new data.
Recent prediction-time unsupervised DNN adaptation techniques improve the prediction accuracy of models on noisy data by re-tuning the batch normalization parameters (see the sketch after this list).
This paper, for the first time, performs a comprehensive measurement study of such techniques to quantify their performance and energy consumption on various edge devices.
arXiv Detail & Related papers (2022-03-21T19:10:40Z) - Boggart: Accelerating Retrospective Video Analytics via Model-Agnostic
Ingest Processing [5.076419064097734]
Boggart is a retrospective video analytics system that delivers ingest-time speedups in a model-agnostic manner.
Our underlying insight is that traditional computer vision (CV) algorithms are capable of performing computations that can be used to accelerate diverse queries with wide-ranging CNNs.
At query-time, Boggart uses several novel techniques to collect the smallest sample of CNN results required to meet the target accuracy.
arXiv Detail & Related papers (2021-06-21T19:21:16Z) - Continual 3D Convolutional Neural Networks for Real-time Processing of
Videos [93.73198973454944]
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs).
Co3D CNNs process videos frame by frame rather than clip by clip.
We show that Co3D CNNs initialised with the weights of pre-existing state-of-the-art video recognition models reduce floating point operations for frame-wise computations by 10.0-12.4x while improving accuracy on Kinetics-400 by 2.3-3.8%.
arXiv Detail & Related papers (2021-05-31T18:30:52Z) - MoViNets: Mobile Video Networks for Efficient Video Recognition [52.49314494202433]
3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets.
We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs.
arXiv Detail & Related papers (2021-03-21T23:06:38Z) - RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks
on Mobile Devices [57.877112704841366]
This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs.
For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobiles.
arXiv Detail & Related papers (2020-07-20T02:05:32Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
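For the test-time adaptation entry above (Benchmarking Test-Time Unsupervised Deep Neural Network Adaptation on Edge Devices), the sketch below illustrates the general idea of re-tuning batch normalization statistics on unlabeled data at prediction time. It is a generic PyTorch sketch under stated assumptions, not the implementation benchmarked in that paper; the adapt_batchnorm helper is hypothetical.

```python
# Generic sketch of prediction-time BatchNorm adaptation (illustrative only).
import torch
import torch.nn as nn

def adapt_batchnorm(model: nn.Module, unlabeled_loader, device: str = "cpu") -> nn.Module:
    """Re-estimate BatchNorm running statistics on unlabeled test-time data."""
    model.to(device).eval()                      # keep dropout etc. in inference mode
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()              # drop source-domain statistics
            m.train()                            # let forward passes update mean/var
    with torch.no_grad():
        for batch in unlabeled_loader:           # labels, if present, are ignored
            x = batch[0] if isinstance(batch, (list, tuple)) else batch
            model(x.to(device))
    model.eval()                                 # freeze the adapted statistics
    return model
```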