DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices
- URL: http://arxiv.org/abs/2108.09457v1
- Date: Sat, 21 Aug 2021 08:13:22 GMT
- Title: DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices
- Authors: Stephan Patrick Baller, Anshul Jindal, Mohak Chadha, Michael Gerndt
- Abstract summary: We present and compare the performance in terms of inference time and power consumption of the four Systems on a Chip (SoCs): Asus Tinker Edge R, Raspberry Pi 4, Google Coral Dev Board, Nvidia Jetson Nano, and one microcontroller: Arduino Nano 33 BLE.
For a low fraction of inference time, i.e. less than 29.3% of the time for MobileNetV2, the Jetson Nano performs faster than the other devices.
- Score: 0.6021787236982659
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: EdgeAI (Edge computing based Artificial Intelligence) has been
actively researched in recent years to handle a variety of massively
distributed AI applications under strict latency requirements. Meanwhile, many
companies have released edge devices with small form factors (low power
consumption and limited resources), such as the popular Raspberry Pi and
Nvidia's Jetson Nano, to act as compute nodes in edge computing environments.
Although these edge devices are limited in computing power and hardware
resources, they include accelerators to enhance their performance. It is
therefore interesting to see how AI-based Deep Neural Networks perform on such
resource-constrained devices.
In this work, we present and compare the performance in terms of inference
time and power consumption of the four Systems on a Chip (SoCs): Asus Tinker
Edge R, Raspberry Pi 4, Google Coral Dev Board, Nvidia Jetson Nano, and one
microcontroller: Arduino Nano 33 BLE, on different deep learning models and
frameworks. We also provide a method for measuring power consumption, inference
time and accuracy for the devices, which can be easily extended to other
devices. Our results show that, for TensorFlow-based quantized models, the
Google Coral Dev Board delivers the best performance in both inference time
and power consumption. For a low fraction of inference computation time, i.e.
less than 29.3% of the time for MobileNetV2, the Jetson Nano performs faster
than the other devices.
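The measurement method the abstract describes (repeated timed inferences per device) can be sketched as a minimal, framework-agnostic harness. This is a sketch under assumptions, not the paper's actual tooling: `run_inference` is a hypothetical stand-in for a real model invocation (e.g. a TFLite interpreter call), and warmup/run counts are illustrative.

```python
# Minimal sketch of a device-agnostic inference-latency benchmark, in the
# spirit of the measurement method described above. The study benchmarks
# TensorFlow models on real edge hardware; here `run_inference` is a
# hypothetical placeholder so the harness stays self-contained.
import time
import statistics

def benchmark(run_inference, warmup=5, runs=50):
    """Return (median_ms, stdev_ms) over repeated calls to run_inference()."""
    for _ in range(warmup):  # warm caches before timing
        run_inference()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1000.0)  # ms
    return statistics.median(samples), statistics.stdev(samples)

if __name__ == "__main__":
    # Dummy workload standing in for a model's forward pass.
    dummy = lambda: sum(i * i for i in range(10_000))
    median_ms, stdev_ms = benchmark(dummy)
    print(f"median={median_ms:.3f} ms  stdev={stdev_ms:.3f} ms")
```

Reporting the median rather than the mean makes the figure robust to occasional scheduling hiccups on the small SoCs compared in the paper.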
Related papers
- A Converting Autoencoder Toward Low-latency and Energy-efficient DNN
Inference at the Edge [4.11949030493552]
We present CBNet, a low-latency and energy-efficient deep neural network (DNN) inference framework tailored for edge devices.
It utilizes a "converting" autoencoder to efficiently transform hard images into easy ones.
CBNet achieves up to 4.8x speedup in inference latency and 79% reduction in energy usage compared to competing techniques.
arXiv Detail & Related papers (2024-03-11T08:13:42Z) - Realtime Facial Expression Recognition: Neuromorphic Hardware vs. Edge AI Accelerators [0.5492530316344587]
The paper focuses on real-time facial expression recognition (FER) systems as an important component in various real-world applications such as social robotics.
We investigate two hardware options for the deployment of FER machine learning (ML) models at the edge: neuromorphic hardware versus edge AI accelerators.
arXiv Detail & Related papers (2024-01-30T16:12:20Z) - Benchmarking Jetson Edge Devices with an End-to-end Video-based Anomaly
Detection System [0.0]
We implement an end-to-end video-based crime-scene anomaly detection system that takes surveillance video as input.
The system is deployed and operates on multiple Jetson edge devices (Nano, AGX Xavier, Orin Nano).
We provide the experience of an AI-based system deployment on various Jetson Edge devices with Docker technology.
arXiv Detail & Related papers (2023-07-28T17:16:57Z) - Fast GraspNeXt: A Fast Self-Attention Neural Network Architecture for
Multi-task Learning in Computer Vision Tasks for Robotic Grasping on the Edge [80.88063189896718]
High architectural and computational complexity can result in poor suitability for deployment on embedded devices.
Fast GraspNeXt is a fast self-attention neural network architecture tailored for embedded multi-task learning in computer vision tasks for robotic grasping.
arXiv Detail & Related papers (2023-04-21T18:07:14Z) - NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter [11.715961583058226]
Nano quadcopters are small, agile, and cheap platforms that are well suited for deployment in narrow, cluttered environments.
Recent machine learning developments promise high-performance perception at low latency.
Dedicated edge computing hardware has the potential to augment the processing capabilities of these limited devices.
We present NanoFlowNet, a lightweight convolutional neural network for real-time dense optical flow estimation on edge computing hardware.
arXiv Detail & Related papers (2022-09-14T20:35:51Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions remains computationally and energy expensive.
We propose a new benchmark for tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in accuracy by 14%; however, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z) - MAPLE-Edge: A Runtime Latency Predictor for Edge Devices [80.01591186546793]
We propose MAPLE-Edge, an edge device-oriented extension of MAPLE, the state-of-the-art latency predictor for general purpose hardware.
Compared to MAPLE, MAPLE-Edge can describe the runtime and target device platform using a much smaller set of CPU performance counters.
We also demonstrate that unlike MAPLE which performs best when trained on a pool of devices sharing a common runtime, MAPLE-Edge can effectively generalize across runtimes.
arXiv Detail & Related papers (2022-04-27T14:00:48Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - AdderNet and its Minimalist Hardware Design for Energy-Efficient
Artificial Intelligence [111.09105910265154]
We present a novel minimalist hardware architecture using the adder convolutional neural network (AdderNet).
The whole AdderNet achieves a 16% speedup in practice.
We conclude that AdderNet surpasses all the other competitors.
arXiv Detail & Related papers (2021-01-25T11:31:52Z) - Efficient Neural Network Deployment for Microcontroller [0.0]
This paper explores and generalizes convolutional neural network deployment for microcontrollers.
The memory savings and performance will be compared with CMSIS-NN framework developed for ARM Cortex-M CPUs.
The final purpose is to develop a tool that consumes a PyTorch model with trained network weights and turns it into an optimized C/C++ inference engine for microcontrollers with low memory (kilobyte level) and limited computing capability.
arXiv Detail & Related papers (2020-07-02T19:21:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.