Exploring the Connection Between Binary and Spiking Neural Networks
- URL: http://arxiv.org/abs/2002.10064v3
- Date: Thu, 21 May 2020 21:53:42 GMT
- Title: Exploring the Connection Between Binary and Spiking Neural Networks
- Authors: Sen Lu, Abhronil Sengupta
- Abstract summary: We bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks.
We show that training Spiking Neural Networks in the extreme quantization regime results in near-full-precision accuracies on large-scale datasets.
- Score: 1.329054857829016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-chip edge intelligence has necessitated the exploration of algorithmic
techniques to reduce the compute requirements of current machine learning
frameworks. This work aims to bridge the recent algorithmic progress in
training Binary Neural Networks and Spiking Neural Networks - both of which are
driven by the same motivation and yet synergies between the two have not been
fully explored. We show that training Spiking Neural Networks in the extreme
quantization regime results in near full-precision accuracies on large-scale
datasets like CIFAR-100 and ImageNet. An important implication of this work
is that Binary Spiking Neural Networks can be enabled by "In-Memory" hardware
accelerators catered for Binary Neural Networks without suffering any accuracy
degradation due to binarization. We utilize standard training techniques for
non-spiking networks to generate our spiking networks via a conversion process and
also perform an extensive empirical analysis and explore simple design-time and
run-time optimization techniques for reducing inference latency of spiking
networks (both for binary and full-precision models) by an order of magnitude
over prior work.
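The conversion pipeline above can be made concrete with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it takes weight matrices from a trained non-spiking network (binary or full precision), replaces each ReLU with an integrate-and-fire neuron, and runs rate-coded inference. All names are hypothetical, and the layer-wise thresholds are assumed to come from a standard threshold-balancing pass over training data.
```python
import numpy as np

def if_layer_step(v_mem, inp, threshold):
    # One timestep of an integrate-and-fire layer: accumulate input current,
    # spike where the membrane potential crosses threshold, then reset by
    # subtraction ("soft reset"), which tends to lower conversion error
    # compared with resetting the potential to zero.
    v_mem += inp
    spikes = (v_mem >= threshold).astype(np.float32)
    v_mem -= spikes * threshold
    return v_mem, spikes

def snn_inference(x, weights, thresholds, timesteps=100):
    # Rate-coded inference through a converted feed-forward SNN. `weights`
    # holds the matrices copied from the trained ANN; `thresholds` holds one
    # balanced firing threshold per layer (e.g. the maximum pre-activation
    # recorded on training data).
    v_mems = [np.zeros(w.shape[0]) for w in weights]
    out_rates = np.zeros(weights[-1].shape[0])
    for _ in range(timesteps):
        spikes = x  # the analog input is injected at every timestep
        for i, w in enumerate(weights):
            v_mems[i], spikes = if_layer_step(v_mems[i], w @ spikes, thresholds[i])
        out_rates += spikes
    return out_rates / timesteps  # spike rates approximate the ANN activations
```
In this framing, `timesteps` is the latency knob: the design-time and run-time optimizations mentioned above aim to preserve accuracy while cutting the number of timesteps needed for the output rates to approximate the original activations.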
Related papers
- Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket [10.552465253379134]
We focus on a problem: how can a binary neural network achieve the crucial accuracy level (e.g., 80%) on ILSVRC-2012 ImageNet?
We design a novel binary architecture BNext based on a comprehensive study of binary architectures and their optimization process.
We propose a novel knowledge-distillation technique to alleviate the counter-intuitive overfitting problem observed when attempting to train extremely accurate binary models (a generic distillation sketch appears after this list).
arXiv Detail & Related papers (2022-11-23T13:08:58Z)
- A Faster Approach to Spiking Deep Convolutional Neural Networks [0.0]
Spiking neural networks (SNNs) have closer dynamics to the brain than current deep neural networks.
We propose a network structure based on previous work to improve network runtime and accuracy.
arXiv Detail & Related papers (2022-10-31T16:13:15Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- High-Capacity Expert Binary Networks [56.87581500474093]
Network binarization is a promising hardware-aware direction for creating efficient deep models.
Despite its memory and computational advantages, reducing the accuracy gap between binary models and their real-valued counterparts remains a challenging, unsolved research problem.
We propose Expert Binary Convolution, which, for the first time, tailors conditional computing to binary networks by learning to select one data-specific expert binary filter at a time conditioned on input features.
arXiv Detail & Related papers (2020-10-07T17:58:10Z)
- Controlling Information Capacity of Binary Neural Network [21.06914471328105]
We present a method for training binary networks that maintains a stable predefined level of their information capacity throughout the training process.
Experiments conducted on the SVHN, CIFAR and ImageNet datasets demonstrate that the proposed approach yields statistically significant accuracy improvements for binary networks.
arXiv Detail & Related papers (2020-08-04T10:08:28Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose using evolutionary search to facilitate the construction and training of binarized MobileNet.
Inspired by one-shot architecture search frameworks, we adapt the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, categorized into native solutions that apply binarization directly and optimized solutions that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error (a minimal binarization sketch follows this list).
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
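To make the survey's taxonomy concrete, below is a minimal PyTorch-style sketch of the canonical binarization recipe (an illustration of the common pattern, not any single paper's method): latent full-precision weights are quantized with the sign function on the forward pass, scaled by their mean magnitude to reduce quantization error, and trained with the straight-through estimator to approximate the gradient of the non-differentiable sign.
```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    # Sign binarization with a straight-through estimator (STE): the forward
    # pass quantizes to {-1, +1}; the backward pass lets gradients through
    # unchanged wherever the latent weight lies in [-1, 1].
    # (torch.sign maps 0 to 0; production code usually maps 0 to +1.)
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # zero gradient outside [-1, 1]

class BinaryLinear(torch.nn.Linear):
    # Linear layer keeping full-precision latent weights; only a binarized,
    # magnitude-scaled copy is used in the forward pass (scaling in the style
    # of XNOR-Net reduces the quantization error the survey mentions).
    def forward(self, x):
        scale = self.weight.abs().mean()
        w_bin = BinarizeSTE.apply(self.weight) * scale
        return F.linear(x, w_bin, self.bias)
```
During training only the latent weights are updated; at deployment, the {-1, +1} copies (plus one scale per layer) are all that must be stored, which is what makes such models attractive for in-memory accelerators.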
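Similarly, the knowledge-distillation idea flagged in the BNext entry above can be illustrated generically. The function below is a standard temperature-softened distillation loss with hypothetical defaults for `T` and `alpha`; it shows the general mechanism, not BNext's specific technique.
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Blend the usual hard-label cross-entropy with a KL term between
    # temperature-softened teacher and student distributions. A full-precision
    # teacher gives the binary student a smoother training signal, which can
    # help curb the overfitting behavior the entry describes.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # the T^2 factor keeps soft-loss gradients on a comparable scale
    return alpha * soft + (1 - alpha) * hard
```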
This list is automatically generated from the titles and abstracts of the papers on this site.