Highly-Efficient Binary Neural Networks for Visual Place Recognition
- URL: http://arxiv.org/abs/2202.12375v1
- Date: Thu, 24 Feb 2022 22:05:11 GMT
- Title: Highly-Efficient Binary Neural Networks for Visual Place Recognition
- Authors: Bruno Ferrarini, Michael Milford, Klaus D. McDonald-Maier and Shoaib
Ehsan
- Abstract summary: VPR is a fundamental task for autonomous navigation as it enables a robot to localize itself in the workspace when a known location is detected.
CNN-based techniques achieve state-of-the-art VPR performance but are computationally intensive and energy demanding.
This paper presents a class of BNNs for VPR that combines depthwise separable factorization and binarization to replace the first convolutional layer.
- Score: 24.674034243725455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: VPR is a fundamental task for autonomous navigation as it enables a robot to
localize itself in the workspace when a known location is detected. Although
accuracy is an essential requirement for a VPR technique, computational and
energy efficiency are no less important for real-world applications. CNN-based
techniques achieve state-of-the-art VPR performance but are computationally
intensive and energy demanding. Binary neural networks (BNNs) have recently been
proposed to address VPR efficiently. Although a typical BNN is an order of
magnitude more efficient than a CNN, its processing time and energy usage can
be further improved. In a typical BNN, the first convolution is not completely
binarized for the sake of accuracy. Consequently, the first layer is the
slowest network stage, requiring a large share of the entire computational
effort. This paper presents a class of BNNs for VPR that combines depthwise
separable factorization and binarization to replace the first convolutional
layer, improving computational and energy efficiency. Our best model achieves
state-of-the-art VPR performance while spending considerably less time and
energy to process an image than a BNN using a non-binary convolution as a first
stage.
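The first-layer replacement described in the abstract can be sketched as follows. This is an illustrative numpy reconstruction, not the authors' implementation: the shapes, the XNOR-Net-style scaling factor, and the valid-padding convolution loop are assumptions made for the sake of a self-contained example.

```python
import numpy as np

def binarize(w):
    """Sign-binarize a weight tensor to {-1, +1} with a single scaling
    factor alpha = mean(|w|) (XNOR-Net-style; assumed, not from the paper)."""
    alpha = np.abs(w).mean()
    return alpha, np.where(w >= 0, 1.0, -1.0)

def depthwise_separable_binary_conv(x, w_dw, w_pw):
    """Sketch of a binarized depthwise-separable first layer:
    a binary depthwise conv (one kxk filter per input channel)
    followed by a binary 1x1 pointwise conv that mixes channels.
    x:    (C, H, W) input
    w_dw: (C, k, k) depthwise filters
    w_pw: (O, C)    pointwise filters
    Returns an (O, H-k+1, W-k+1) feature map (valid padding, stride 1)."""
    C, H, W = x.shape
    k = w_dw.shape[1]
    a_dw, b_dw = binarize(w_dw)
    a_pw, b_pw = binarize(w_pw)
    Ho, Wo = H - k + 1, W - k + 1
    # Depthwise stage: each channel is convolved with its own binary filter,
    # so the multiply-accumulates could be realized as XNOR/popcount.
    dw = np.zeros((C, Ho, Wo))
    for c in range(C):
        for i in range(Ho):
            for j in range(Wo):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * b_dw[c])
    dw *= a_dw
    # Pointwise stage: 1x1 binary conv = channel mixing via a matrix product.
    out = np.tensordot(a_pw * b_pw, dw, axes=([1], [0]))  # (O, Ho, Wo)
    return out
```

Factorizing the kxk full convolution into a depthwise pass plus a 1x1 pass cuts the multiply-accumulate count roughly by a factor of k^2 * O / (k^2 + O); binarizing both passes is what removes the non-binary first stage the abstract identifies as the bottleneck.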
Related papers
- Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z) - Basic Binary Convolution Unit for Binarized Image Restoration Network [146.0988597062618]
In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for image restoration tasks.
Based on our findings and analyses, we design a simple yet efficient basic binary convolution unit (BBCU)
Our BBCU significantly outperforms other BNNs and lightweight models, which shows that BBCU can serve as a basic unit for binarized IR networks.
arXiv Detail & Related papers (2022-10-02T01:54:40Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - HyBNN and FedHyBNN: (Federated) Hybrid Binary Neural Networks [0.0]
We introduce a novel hybrid neural network architecture, Hybrid Binary Neural Network (HyBNN)
HyBNN consists of a task-independent, general, full-precision variational autoencoder with a binary latent space and a task-specific binary neural network.
We show that our proposed system significantly outperforms a vanilla binary neural network with input binarization.
arXiv Detail & Related papers (2022-05-19T20:27:01Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
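The decomposition described in this summary can be illustrated with a small numpy sketch. The 2-bit odd-valued quantizer and the branch-recombination function below are my assumptions for the example, not the paper's exact scheme: any quantized entry in {-(2^bits - 1), ..., -1, +1, ..., 2^bits - 1} is rewritten as a power-of-two-weighted sum of {-1, +1} branches.

```python
import numpy as np

def decompose_to_binary_branches(w_q, bits=2):
    """Rewrite a quantized weight tensor as w_q = sum_i 2^i * b_i with every
    branch b_i in {-1, +1}, so each branch admits XNOR/popcount arithmetic.
    Trick: shift w_q to a nonnegative code u in [0, 2^bits - 1], then map
    each bit of u from {0, 1} back to {-1, +1}."""
    u = (w_q.astype(np.int64) + (2 ** bits - 1)) // 2
    return [2 * ((u >> i) & 1) - 1 for i in range(bits)]

def multibranch_binary_matmul(x_b, w_q, bits=2):
    """Multiply a {-1, +1} input against a quantized weight matrix by summing
    the per-branch binary products with power-of-two weights. Each x_b @ b_i
    is a fully binary product; the final sum reproduces x_b @ w_q exactly."""
    branches = decompose_to_binary_branches(w_q, bits)
    return sum((2 ** i) * (x_b @ b) for i, b in enumerate(branches))
```

The recombination is exact: for 2-bit odd values, b_0 + 2 * b_1 ranges over {-3, -1, +1, +3}, so the multi-branch result matches the full quantized product while every branch runs as a binary network.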
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with
Fractional Activations [20.218382369944152]
Binary neural networks (BNNs) have 1-bit weights and activations.
BNNs tend to produce a much lower accuracy on realistic datasets such as ImageNet.
This work proposes FracBNN, which exploits fractional activations to substantially improve the accuracy of BNNs.
arXiv Detail & Related papers (2020-12-22T17:49:30Z) - Binary Neural Networks for Memory-Efficient and Effective Visual Place
Recognition in Changing Environments [24.674034243725455]
Visual place recognition (VPR) is a robot's ability to determine whether a place was visited before using visual data.
CNN-based approaches are unsuitable for resource-constrained platforms, such as small robots and drones.
We propose a new class of highly compact models that drastically reduces the memory requirements and computational effort.
arXiv Detail & Related papers (2020-10-01T22:59:34Z) - Optimizing Memory Placement using Evolutionary Graph Reinforcement
Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.