Survey on Computer Vision Techniques for Internet-of-Things Devices
- URL: http://arxiv.org/abs/2308.02553v1
- Date: Wed, 2 Aug 2023 03:41:24 GMT
- Title: Survey on Computer Vision Techniques for Internet-of-Things Devices
- Authors: Ishmeet Kaur and Adwaita Janardhan Jadhav
- Abstract summary: Deep neural networks (DNNs) are state-of-the-art techniques for solving computer vision problems.
DNNs require billions of parameters and operations to achieve state-of-the-art results.
This requirement makes DNNs extremely compute, memory, and energy-hungry, and consequently difficult to deploy on small battery-powered Internet-of-Things (IoT) devices with limited computing resources.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are state-of-the-art techniques for solving most
computer vision problems. DNNs require billions of parameters and operations to
achieve state-of-the-art results. This requirement makes DNNs extremely
compute, memory, and energy-hungry, and consequently difficult to deploy on
small battery-powered Internet-of-Things (IoT) devices with limited computing
resources. Deployment of DNNs on Internet-of-Things devices, such as traffic
cameras, can improve public safety by enabling applications such as automatic
accident detection and emergency response. Through this paper, we survey the
recent advances in low-power and energy-efficient DNN implementations that
improve the deployability of DNNs without significantly sacrificing accuracy.
In general, these techniques either reduce the memory requirements, the number
of arithmetic operations, or both. The techniques can be divided into three
major categories: neural network compression, network architecture search and
design, and compiler and graph optimizations. In this paper, we survey
low-power techniques for both convolutional and transformer DNNs and summarize
the advantages, disadvantages, and open research problems.
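As a concrete instance of the first category, neural network compression, the sketch below shows symmetric 8-bit post-training quantization of a weight tensor. It is a minimal illustration only; the layer shape, scaling rule, and use of NumPy are assumptions for this example, not details taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)  # hypothetical layer weights

# Symmetric 8-bit post-training quantization: store int8 values plus one
# fp32 scale, cutting weight memory roughly 4x versus fp32.
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_int8.astype(np.float32) * scale  # dequantized view used at inference

print("max abs error:", np.abs(w - w_deq).max())
print("bytes fp32:", w.nbytes, "-> bytes int8:", w_int8.nbytes)
```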
Related papers
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep-learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
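The core identity behind rate-based DNN-to-SNN conversion can be shown with a toy example: over many timesteps, the firing rate of an integrate-and-fire neuron with reset-by-subtraction approximates a ReLU activation. This is a generic illustration, not the paper's procedure; the threshold, timestep count, and input range are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_neuron_rate(drive, T=1000, v_thresh=1.0):
    """Integrate-and-fire neuron with reset-by-subtraction; returns spike rate."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += drive            # constant input current each timestep
        if v >= v_thresh:
            v -= v_thresh     # soft reset preserves residual charge
            spikes += 1
    return spikes / T

# For drives in [0, 1], the firing rate approximates the ReLU activation,
# which is the identity rate-based conversion relies on.
for a in [0.0, 0.25, 0.5, 0.9]:
    print(a, relu(a), if_neuron_rate(a))
```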
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
- Sparsifying Binary Networks [3.8350038566047426]
Binary neural networks (BNNs) have demonstrated their ability to solve complex tasks with accuracy comparable to that of full-precision deep neural networks (DNNs).
Despite recent improvements, they suffer from a fixed and limited compression factor that may prove insufficient for devices with very limited resources.
We propose sparse binary neural networks (SBNNs), a novel model and training scheme which introduces sparsity in BNNs and a new quantization function for binarizing the network's weights.
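A minimal sketch of the general sparse-binarization idea follows, assuming a simple magnitude threshold and a per-tensor scale; the paper's actual quantization function and training scheme differ in detail.

```python
import numpy as np

def sparse_binarize(w, sparsity=0.5):
    """Map weights to {-1, 0, +1}: binarize by sign, then zero the smallest
    magnitudes to reach the requested sparsity (illustrative scheme only)."""
    thresh = np.quantile(np.abs(w), sparsity)
    b = np.sign(w)
    b[np.abs(w) < thresh] = 0.0
    # Per-tensor scale alpha minimizes ||w - alpha*b||^2 over the kept entries.
    kept = b != 0
    alpha = np.abs(w[kept]).mean() if kept.any() else 0.0
    return alpha * b

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
wb = sparse_binarize(w, sparsity=0.75)
print("nonzero fraction:", (wb != 0).mean())
```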
arXiv Detail & Related papers (2022-07-11T15:54:41Z)
- Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey [4.856755747052137]
Deep Neural Networks (DNNs) are very popular because of their high performance in various cognitive tasks in Machine Learning (ML).
Recent advancements in DNNs have achieved beyond-human accuracy in many tasks, but at the cost of high computational complexity.
This article provides a comprehensive survey and analysis of hardware approximation techniques for DNN accelerators.
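One of the simplest approximations such surveys analyze is operand truncation in multipliers: drop low-order bits before multiplying, trading accuracy for smaller, cheaper logic. The sketch below illustrates the idea in software; the bit width is an arbitrary assumption.

```python
def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    """Truncation-based approximate multiplier: discard the low bits of each
    operand before multiplying."""
    a_t = (a >> drop_bits) << drop_bits
    b_t = (b >> drop_bits) << drop_bits
    return a_t * b_t

exact = 1234 * 5678
approx = approx_mul(1234, 5678)
print(exact, approx, f"relative error {abs(exact - approx) / exact:.2%}")
```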
arXiv Detail & Related papers (2022-03-16T16:33:13Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with offline training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces classification time by three orders of magnitude, with a small 4.5% accuracy penalty compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
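A minimal sketch of the decomposition idea: quantize weights to the odd grid {-3, -1, +1, +3} (scaled) and split them into two {-1, +1} branches evaluated as binary matrix products. The grid, scale, and snapping rule are illustrative assumptions, not the paper's exact encoding or training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))

# Quantize to the 2-bit odd grid {-3, -1, +1, +3} (scaled), then decompose
# W_q = B0 + 2*B1 with B0, B1 in {-1, +1}: two binary branches whose
# products can run on cheap XNOR/popcount-style kernels.
scale = np.abs(w).max() / 3.0
q = np.clip(np.round(w / scale), -3, 3)
q[q % 2 == 0] += 1            # snap even values onto the odd grid
b1 = np.sign(q)               # entries in {-1, +1}
b0 = np.sign(q - 2 * b1)      # entries in {-1, +1}
assert np.array_equal(q, b0 + 2 * b1)

x = rng.normal(size=(8,))
y = scale * (b0 @ x + 2 * (b1 @ x))   # multi-branch binary evaluation
print(np.abs(y - (q @ x) * scale).max())
```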
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Compacting Deep Neural Networks for Internet of Things: Methods and Applications [14.611047945621511]
Deep Neural Networks (DNNs) have shown great success in completing complex tasks.
However, DNNs inevitably incur high computational cost and storage consumption due to the complexity of their hierarchical structures.
This paper presents a comprehensive study of DNN-compaction techniques.
arXiv Detail & Related papers (2021-03-20T03:18:42Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that, through careful design of the models and control of the training process, binary graph neural networks can be trained at only a moderate accuracy cost on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
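As a toy illustration of such a symmetry prior, the check below verifies that circular 1-D convolution is translation-equivariant: shifting the input and then convolving equals convolving and then shifting the output. The convolution definition and shift amount are assumptions for illustration only.

```python
import numpy as np

def conv1d_circular(x, k):
    """Circular 1-D convolution (cross-correlation) via explicit shifts."""
    n = len(x)
    return np.array([sum(k[j] * x[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)
k = rng.normal(size=3)

# Translation equivariance: shift-then-convolve equals convolve-then-shift,
# which is the geometric symmetry CNNs encode by construction.
lhs = conv1d_circular(np.roll(x, 5), k)
rhs = np.roll(conv1d_circular(x, k), 5)
print(np.allclose(lhs, rhs))  # True
```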
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- A Survey of Methods for Low-Power Deep Learning and Computer Vision [0.4234843176066353]
Deep neural networks (DNNs) are successful in many computer vision tasks.
The most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive.
Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy.
arXiv Detail & Related papers (2020-03-24T18:47:24Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in the design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to regain and guarantee high hardware efficiency.
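A toy sketch of pattern-based kernel pruning follows, assuming a hypothetical library of 4-entry 3x3 patterns (not PatDNN's actual set): each kernel keeps whichever pattern preserves the most weight magnitude, so all kernels share a few fixed shapes that a compiler can turn into dense, branch-free inner loops.

```python
import numpy as np

# A small library of 4-entry patterns for 3x3 kernels (flattened indices);
# these patterns are made up for illustration.
PATTERNS = [
    np.array([0, 1, 3, 4]), np.array([1, 2, 4, 5]),
    np.array([3, 4, 6, 7]), np.array([4, 5, 7, 8]),
]

def pattern_prune(kernel):
    """Keep the 4 weights of whichever pattern preserves the most magnitude."""
    flat = kernel.reshape(9)
    best = max(PATTERNS, key=lambda p: np.abs(flat[p]).sum())
    pruned = np.zeros(9)
    pruned[best] = flat[best]
    return pruned.reshape(3, 3)

rng = np.random.default_rng(0)
k = rng.normal(size=(3, 3))
print(pattern_prune(k))
```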
arXiv Detail & Related papers (2020-01-01T04:52:07Z)