Architecturing Binarized Neural Networks for Traffic Sign Recognition
- URL: http://arxiv.org/abs/2303.15005v1
- Date: Mon, 27 Mar 2023 08:46:31 GMT
- Title: Architecturing Binarized Neural Networks for Traffic Sign Recognition
- Authors: Andreea Postovan and Mădălina Eraşcu
- Abstract summary: Binarized neural networks (BNNs) have shown promising results on computationally limited and energy-constrained devices.
We propose BNN architectures which achieve more than $90\%$ accuracy on the German Traffic Sign Recognition Benchmark (GTSRB).
The number of parameters of these architectures ranges from 100k to under 2M.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic signs support road safety and the management of traffic flow, and are
hence an integral part of any vision system for autonomous driving. While the
use of deep learning is well established in traffic sign classification due to
the high accuracy achieved by convolutional neural networks (CNNs) (the state
of the art is 99.46\%), little is known about binarized neural networks (BNNs).
Compared to CNNs, BNNs reduce model size, simplify convolution operations, and
have shown promising results on the computationally limited, energy-constrained
devices that arise in the context of autonomous driving. This work presents a
bottom-up approach to architecting BNNs by studying the characteristics of
their constituent layers. These constituent layers (binarized convolutional
layers, max pooling, batch normalization, fully connected layers) are studied
in various combinations and with different kernel sizes, numbers of filters,
and numbers of neurons, using the German Traffic Sign Recognition Benchmark
(GTSRB) for training. As a result, we propose BNN architectures which achieve
more than $90\%$ accuracy on GTSRB (the maximum is $96.45\%$) and an average
accuracy greater than $80\%$ (the maximum is $88.99\%$) when the Belgian and
Chinese traffic sign datasets are also used for testing. The number of
parameters of these architectures ranges from 100k to under 2M. The
accompanying material of this paper is publicly available at
https://github.com/apostovan21/BinarizedNeuralNetwork.
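The constituent layers named in the abstract map directly onto standard deep learning primitives. Below is a minimal, hypothetical PyTorch sketch of one such binarized block (binarized convolution, max pooling, batch normalization, fully connected output); the layer sizes and kernel size are illustrative choices rather than one of the proposed architectures, and `BinarizeSTE` is an assumed helper implementing the usual sign binarization with a straight-through estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (hard-tanh STE).
        return grad_out * (x.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Convolution whose weights and inputs are binarized to +/-1."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        x_bin = BinarizeSTE.apply(x)
        return F.conv2d(x_bin, w_bin, self.bias, self.stride, self.padding)

class TinyBNN(nn.Module):
    """Illustrative stack: binarized conv -> max pooling -> batch norm -> FC.

    Real BNNs usually keep the first and last layers full-precision; everything
    is binarized here only for brevity.
    """
    def __init__(self, num_classes=43):  # GTSRB has 43 sign classes
        super().__init__()
        self.conv = BinaryConv2d(3, 32, kernel_size=3, padding=1, bias=False)
        self.pool = nn.MaxPool2d(2)
        self.bn = nn.BatchNorm2d(32)
        self.fc = nn.Linear(32 * 16 * 16, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.bn(self.pool(self.conv(x)))
        return self.fc(x.flatten(1))
```

Constraining weights and activations to ±1 is what allows deployments to replace multiply-accumulate operations with XNOR and popcount, which is the source of the model-size and convolution savings the abstract refers to.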
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- ApproxDARTS: Differentiable Neural Architecture Search with Approximate Multipliers
We present ApproxDARTS, a neural architecture search (NAS) method that enables the popular differentiable search method DARTS to exploit approximate multipliers.
We show that ApproxDARTS can perform a complete architecture search in under $10$ GPU hours and produce competitive convolutional neural networks (CNNs) containing approximate multipliers in their convolutional layers.
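For context, ApproxDARTS inherits the core mechanism of DARTS: a continuous relaxation in which every edge of the network computes a softmax-weighted mixture of candidate operations, so architecture parameters can be trained by gradient descent alongside the ordinary weights. A minimal sketch of that mixed operation follows; the candidate set here is a toy one (real DARTS search spaces use separable and dilated convolutions, pooling, and skip connections).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style mixed operation: softmax-weighted sum of candidate ops."""
    def __init__(self, channels: int):
        super().__init__()
        # Toy candidate set; all ops must preserve the tensor shape.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.AvgPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # One architecture parameter per candidate, learned jointly.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

After the search converges, each edge is discretized by keeping only the candidate with the largest architecture weight.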
arXiv Detail & Related papers (2024-04-08T09:54:57Z)
- Detection-segmentation convolutional neural network for autonomous vehicle perception
Object detection and segmentation are two core modules of an autonomous vehicle perception system.
Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-performance computing platforms.
A reduction in the complexity of the network can be achieved by using an appropriate architecture, representation, and computing platform.
arXiv Detail & Related papers (2023-06-30T08:54:52Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics
Current practice requires expensive model training in order to predict performance.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- Binary Complex Neural Network Acceleration on FPGA
The Binarized Complex Neural Network (BCNN) shows great potential for classifying complex-valued data in real time.
We propose a structural-pruning-based accelerator for BCNN that delivers more than 5000 frames/s of inference throughput on edge devices.
arXiv Detail & Related papers (2021-08-10T17:53:30Z)
- Training Graph Neural Networks with 1000 Layers
We study reversible connections, group convolutions, weight tying, and equilibrium models to advance the memory and parameter efficiency of GNNs.
To the best of our knowledge, RevGNN-Deep is the deepest GNN in the literature by one order of magnitude.
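The memory savings behind such deep, reversible GNNs come from the standard reversible-residual recipe: features are split into two halves coupled additively, so each block's inputs can be recomputed exactly from its outputs during the backward pass instead of being cached. A minimal sketch on dense tensors (plain PyTorch, not the authors' GNN code):

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Reversible residual coupling.

    Forward:  y1 = x1 + f(x2),  y2 = x2 + g(y1)
    Inverse:  x2 = y2 - g(y1),  x1 = y1 - f(x2)
    """
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# The inputs are recoverable from the outputs, so they need not be stored.
block = ReversibleBlock(nn.Linear(8, 8), nn.Linear(8, 8))
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
assert torch.allclose(r1, x1, atol=1e-6) and torch.allclose(r2, x2, atol=1e-6)
```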
arXiv Detail & Related papers (2021-06-14T15:03:00Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models with statistical models, the roofline model, and a refined roofline model.
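For reference, the roofline model used in that comparison lower-bounds a kernel's execution time by whichever resource is the bottleneck: compute at peak throughput or data movement at peak memory bandwidth. A minimal sketch with hypothetical hardware numbers:

```python
def roofline_time(flops: float, bytes_moved: float,
                  peak_flops: float, peak_bw: float) -> float:
    """Roofline estimate: a kernel is either compute-bound or memory-bound."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical edge accelerator: 1 TFLOP/s peak compute, 25 GB/s DRAM bandwidth.
t = roofline_time(flops=1e8,          # ~100 MFLOPs for a small conv layer
                  bytes_moved=1.2e7,  # ~12 MB of float32 traffic
                  peak_flops=1e12,
                  peak_bw=25e9)
print(f"lower-bound execution time: {t * 1e6:.0f} us")  # memory-bound: 480 us
```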
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Self-Distribution Binary Neural Networks
We study binary neural networks (BNNs), in which both the weights and activations are binary (i.e., use a 1-bit representation).
We propose the Self-Distribution Binary Neural Network (SD-BNN).
Experiments on CIFAR-10 and ImageNet datasets show that the proposed SD-BNN consistently outperforms the state-of-the-art (SOTA) BNNs.
arXiv Detail & Related papers (2021-03-03T13:39:52Z)
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.