BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian
Learning
- URL: http://arxiv.org/abs/2012.08300v1
- Date: Tue, 15 Dec 2020 14:06:36 GMT
- Title: BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian
Learning
- Authors: Hyeryung Jang and Nicolas Skatchkovsky and Osvaldo Simeone
- Abstract summary: Spiking Neural Networks (SNNs) are biologically inspired, dynamic, event-driven models that enhance energy efficiency.
An SNN model is introduced that combines the benefits of temporally sparse binary activations and of binary weights.
Experiments quantify the performance loss with respect to full-precision implementations and demonstrate the advantage of the Bayesian paradigm in terms of accuracy and calibration.
- Score: 37.376989855065545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Neural Network (ANN)-based inference on battery-powered devices
can be made more energy-efficient by restricting the synaptic weights to be
binary, hence eliminating the need to perform multiplications. An alternative,
emerging, approach relies on the use of Spiking Neural Networks (SNNs),
biologically inspired, dynamic, event-driven models that enhance energy
efficiency via the use of binary, sparse, activations. In this paper, an SNN
model is introduced that combines the benefits of temporally sparse binary
activations and of binary weights. Two learning rules are derived, the first
based on the combination of straight-through and surrogate gradient techniques,
and the second based on a Bayesian paradigm. Experiments validate the
performance loss with respect to full-precision implementations, and
demonstrate the advantage of the Bayesian paradigm in terms of accuracy and
calibration.
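To make the two learning rules named in the abstract concrete, the sketch below illustrates, in PyTorch, the generic ingredients they build on: a straight-through estimator for binary weights, a surrogate gradient for the non-differentiable spike function, and, for the Bayesian flavour, binary weights sampled from per-synapse Bernoulli probabilities. This is a minimal illustration under those assumptions, not the paper's actual BiSNN rules; all names (BinarizeSTE, SpikeSurrogate, BinaryLIFLayer) are invented for the example.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; identity (straight-through) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out  # pass the gradient straight through the binarization

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth sigmoid-derivative surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(5.0 * v)           # surrogate steepness 5.0 is an arbitrary choice
        return grad_out * 5.0 * sig * (1 - sig)

class BinaryLIFLayer(nn.Module):
    """One leaky integrate-and-fire layer with binarized weights (illustrative only)."""
    def __init__(self, n_in, n_out, decay=0.9):
        super().__init__()
        self.w = nn.Parameter(0.1 * torch.randn(n_out, n_in))  # real-valued "latent" weights
        self.decay = decay

    def forward(self, spikes_in, v):
        w_bin = BinarizeSTE.apply(self.w)            # {-1, +1} weights in the forward pass
        v = self.decay * v + spikes_in @ w_bin.t()   # membrane potential update
        spikes_out = SpikeSurrogate.apply(v - 1.0)   # fire when v crosses threshold 1.0
        v = v * (1.0 - spikes_out)                   # reset the membrane where a spike occurred
        return spikes_out, v

# Bayesian-flavoured variant (caricature): keep a probability per synapse and sample binary weights.
def sample_binary_weights(logits):
    """Draw w in {-1, +1} from Bernoulli(sigmoid(logits)); training would need e.g. a relaxation."""
    p = torch.sigmoid(logits)
    return 2.0 * torch.bernoulli(p) - 1.0

if __name__ == "__main__":
    layer = BinaryLIFLayer(8, 4)
    v = torch.zeros(1, 4)
    for t in range(10):                              # run a short random spike train through the layer
        x = (torch.rand(1, 8) < 0.3).float()
        out, v = layer(x, v)
```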
Related papers
- On-Device Learning with Binary Neural Networks [2.7040098749051635]
We propose a continual learning (CL) solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as backbone is essential to meet the constraints of low-power devices.
arXiv Detail & Related papers (2023-08-29T13:48:35Z)
- Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNN methods neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Network Binarization via Contrastive Learning [16.274341164897827]
We establish a novel contrastive learning framework for training Binary Neural Networks (BNNs).
Mutual information (MI) is introduced as the metric to measure the information shared between binary and full-precision (FP) activations.
Results show that our method can be implemented as an add-on module on top of existing state-of-the-art binarization methods (a generic contrastive-loss sketch follows this entry).
arXiv Detail & Related papers (2022-07-06T21:04:53Z)
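As referenced in the entry above, mutual information between binary and full-precision activations is, in practice, typically maximized through an InfoNCE-style contrastive loss. The snippet below is a generic sketch of such a term under that assumption, not the paper's actual objective; contrastive_mi_loss is an invented name.

```python
import torch
import torch.nn.functional as F

def contrastive_mi_loss(bin_act, fp_act, temperature=0.1):
    """InfoNCE-style lower bound on the mutual information between binary and
    full-precision activations of the same samples (generic sketch)."""
    b = F.normalize(bin_act.flatten(1), dim=1)           # (N, D) binary-branch features
    f = F.normalize(fp_act.flatten(1), dim=1)            # (N, D) full-precision features
    logits = b @ f.t() / temperature                      # similarity of every (binary, FP) pair
    targets = torch.arange(b.size(0), device=b.device)    # positive pair = same sample index
    return F.cross_entropy(logits, targets)

# usage sketch: add the term to the usual task loss when training the binary network
# loss = task_loss + 0.1 * contrastive_mi_loss(binary_feats, full_precision_feats)
```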
- Bimodal Distributed Binarized Neural Networks [3.0778860202909657]
Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts.
We propose a Bi-Modal Distributed binarization method that imposes a bi-modal distribution on the network weights via kurtosis regularization (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-04-05T06:07:05Z)
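A minimal sketch of a kurtosis regularizer of the kind referenced above: penalizing high kurtosis pushes the weight distribution toward two peaks. The exact form and target used in the paper may differ; the function names and the target value of 1.8 are illustrative assumptions.

```python
import torch

def kurtosis(w, eps=1e-8):
    """Sample kurtosis of a weight tensor (fourth standardized moment)."""
    w = w.flatten()
    mu = w.mean()
    sigma = w.std() + eps
    return (((w - mu) / sigma) ** 4).mean()

def kurtosis_regularizer(weights, target=1.8):
    """Push each layer's kurtosis toward a low value, which favours a bi-modal
    (two-peaked) weight distribution; 'target' is an illustrative choice."""
    return sum((kurtosis(w) - target) ** 2 for w in weights)

# usage sketch: total_loss = task_loss + 1e-4 * kurtosis_regularizer(model.parameters())
```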
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN (here, a Bayesian neural network) for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution [12.46127622357824]
We show that an unbalanced activation distribution can actually improve the accuracy of BNNs.
We also show that adjusting the threshold values of binary activation functions results in such an unbalanced distribution of the binary activations.
Experimental results show that the accuracy of previous BNN models can be improved by simply shifting the threshold values of binary activation functions (a minimal illustration follows this entry).
arXiv Detail & Related papers (2020-12-02T02:49:53Z)
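The threshold-shifting observation above can be illustrated directly: a binary activation is a step function, and moving its threshold changes the balance of +1 versus -1 outputs. The snippet below is a minimal, self-contained illustration with an assumed sign-style activation, not the paper's implementation.

```python
import torch

def binary_activation(x, threshold=0.0):
    """Step-style binary activation: +1 above the threshold, -1 otherwise."""
    return torch.where(x > threshold, torch.ones_like(x), -torch.ones_like(x))

x = torch.randn(100_000)
for th in (0.0, 0.2, 0.5):
    # shifting the threshold skews the +1/-1 balance of the binary activations
    frac_pos = (binary_activation(x, th) > 0).float().mean().item()
    print(f"threshold={th:+.1f}  fraction of +1 outputs={frac_pos:.3f}")
```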
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition (a generic conversion sketch follows this entry).
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
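ANN-to-SNN conversion is commonly done by reusing a trained ReLU network's weights and rescaling each layer so that rate-coded integrate-and-fire neurons approximate the ReLU activations; the paper's progressive, layer-wise scheme is more elaborate than this. The sketch below shows only that generic weight-reuse idea, with all names and values invented for the example.

```python
import torch

def convert_layer(w, b, max_act):
    """Scale a trained ReLU layer so that integrate-and-fire neurons with
    threshold 1.0 fire at rates approximating the (normalized) ReLU outputs."""
    return w / max_act, b / max_act

def run_if_layer(w, b, in_rates, T=100):
    """Rate-coded integrate-and-fire simulation of one converted layer."""
    v = torch.zeros(w.shape[0])
    spike_count = torch.zeros(w.shape[0])
    for _ in range(T):
        v = v + w @ in_rates + b       # presynaptic rates act as a constant input current
        fired = (v >= 1.0).float()
        spike_count += fired
        v = v - fired                  # "soft reset": subtract the threshold after a spike
    return spike_count / T             # output firing rates in [0, 1]

# usage sketch (names and values are illustrative):
# w_snn, b_snn = convert_layer(w_ann, b_ann, max_act=3.2)  # max_act: layer's peak activation on training data
# out_rates = run_if_layer(w_snn, b_snn, in_rates)
```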
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.