S$^2$NN: Time Step Reduction of Spiking Surrogate Gradients for Training
Energy Efficient Single-Step Neural Networks
- URL: http://arxiv.org/abs/2201.10879v1
- Date: Wed, 26 Jan 2022 11:31:21 GMT
- Title: S$^2$NN: Time Step Reduction of Spiking Surrogate Gradients for Training
Energy Efficient Single-Step Neural Networks
- Authors: Kazuma Suetake, Shin-ichi Ikegawa, Ryuji Saiin and Yoshihide Sawada
- Abstract summary: We propose a single-step neural network (S$^2$NN) with low computational cost and high precision.
The proposed S$^2$NN passes information between hidden layers as spikes, like SNNs.
It has no temporal dimension, so there is no latency in the training and inference phases, as with BNNs.
- Score: 0.40145248246551063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the scales of neural networks increase, techniques that enable them to run
with low computational cost and energy efficiency are required. From such
demands, various efficient neural network paradigms, such as spiking neural
networks (SNNs) or binary neural networks (BNNs), have been proposed. However,
they suffer from persistent drawbacks, such as degraded inference accuracy and latency. To
solve these problems, we propose a single-step neural network (S$^2$NN), an
energy-efficient neural network with low computational cost and high precision.
The proposed S$^2$NN passes information between hidden layers as spikes, like
SNNs. Nevertheless, it has no temporal dimension, so there is no latency in the
training and inference phases, as with BNNs. Thus, the proposed S$^2$NN has a
lower computational cost than SNNs, which require time-series processing.
However, S$^2$NN cannot adopt na\"{i}ve backpropagation algorithms due to the
non-differentiable nature of spikes. We derive a suitable neuron model by
reducing the surrogate gradient for multi-time-step SNNs to a single time step.
We experimentally demonstrated that the obtained neuron model enables S$^2$NN
to be trained more accurately and energy-efficiently than with existing neuron
models for SNNs and BNNs. We also showed that the proposed S$^2$NN could achieve
comparable accuracy to full-precision networks while being highly
energy-efficient.
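For intuition, the mechanism described above, a spiking (Heaviside) activation applied once per layer and trained with a surrogate gradient, can be sketched in Python/PyTorch as follows. This is a minimal, hedged illustration, not the authors' derived neuron model; the rectangular surrogate window, the zero threshold, and the class names are assumptions.

```python
import torch


class SingleStepSpike(torch.autograd.Function):
    """Heaviside spike activation with a surrogate gradient.

    Forward: emit a binary spike when the membrane potential exceeds 0.
    Backward: replace the non-differentiable Heaviside derivative with a
    rectangular window around the threshold (one common surrogate choice).
    """

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: gradient passes only near the threshold.
        surrogate = (v.abs() < 0.5).float()
        return grad_output * surrogate


class SpikingLinear(torch.nn.Module):
    """Fully connected layer followed by a single-step spiking activation."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, out_features)

    def forward(self, x):
        return SingleStepSpike.apply(self.fc(x))
```

In a multi-time-step SNN the same surrogate would be applied at every step; collapsing it to a single step is what removes the temporal dimension.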
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z) - Accurate Mapping of RNNs on Neuromorphic Hardware with Adaptive Spiking Neurons [2.9410174624086025]
We present a $\Sigma\Delta$ low-pass RNN (lpRNN) for mapping rate-based RNNs to spiking neural networks (SNNs).
An adaptive spiking neuron model encodes signals using $\Sigma\Delta$-modulation and enables precise mapping.
We demonstrate the implementation of the lpRNN on Intel's neuromorphic research chip Loihi.
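As a rough illustration of the $\Sigma\Delta$-style signal-to-spike encoding mentioned above (not the lpRNN's adaptive neuron model; the ternary spike output and the `step` parameter are assumptions), a first-order modulator accumulates the coding error and emits a spike whenever it grows too large:

```python
import numpy as np


def sigma_delta_encode(signal, step=1.0):
    """First-order Sigma-Delta style encoding of a sampled signal into spikes.

    An integrator accumulates the difference between the input and the
    previously emitted output; a positive or negative spike is emitted once
    the accumulated error crosses the step size, so the average spike rate
    tracks the input amplitude.
    """
    integrator = 0.0
    feedback = 0.0                      # value emitted at the previous step
    spikes = np.zeros(len(signal))
    for t, x in enumerate(signal):
        integrator += x - feedback      # accumulate the coding error
        if integrator >= step:
            spikes[t] = 1.0
        elif integrator <= -step:
            spikes[t] = -1.0
        feedback = spikes[t] * step     # feed the emitted value back
    return spikes
```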
arXiv Detail & Related papers (2024-07-18T14:06:07Z) - High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to exactly the same performance as ANNs.
Our SNN accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z) - tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Network (SNN) models are typically favored because they can offer higher accuracy.
However, employing such models on resource- and energy-constrained embedded platforms is inefficient.
We present a tinySNN framework that optimizes the memory and energy requirements of SNN processing.
arXiv Detail & Related papers (2022-06-17T09:40:40Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
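The DSR idea of differentiating a spike-rate representation rather than individual spikes might be sketched roughly as below; this is a generic illustration under simplifying assumptions (constant input current, soft reset, a clipped-linear proxy in the backward pass), not the paper's exact formulation.

```python
import torch


class RateRepresentation(torch.autograd.Function):
    """Sketch of differentiation on a spike (rate) representation.

    Forward: run an integrate-and-fire neuron for `steps` time steps on a
    constant input current and return the firing rate. Backward: bypass the
    spike dynamics and differentiate a clipped-linear proxy of the
    input-to-rate mapping instead.
    """

    @staticmethod
    def forward(ctx, x, threshold, steps):
        ctx.save_for_backward(x)
        ctx.threshold = threshold
        v = torch.zeros_like(x)
        spike_count = torch.zeros_like(x)
        for _ in range(steps):
            v = v + x                             # constant input current
            fired = (v >= threshold).float()
            spike_count = spike_count + fired
            v = v - fired * threshold             # soft reset
        return spike_count / steps                # firing rate in [0, 1]

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Gradient of the proxy mapping clamp(x / threshold, 0, 1).
        grad_x = grad_output * ((x > 0) & (x < ctx.threshold)).float() / ctx.threshold
        return grad_x, None, None


# Example usage: rates = RateRepresentation.apply(pre_activation, 1.0, 4)
```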
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking
Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
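The paper's algorithm is not reproduced here, but the error it minimizes stems from representing DNN activations as firing rates over very few time steps. Below is a generic sketch of percentile-based threshold calibration and the resulting rate approximation; the percentile value and function names are assumptions, not this paper's method.

```python
import numpy as np


def calibrate_threshold(relu_activations, percentile=99.9):
    """Pick a per-layer firing threshold from the observed ReLU activation
    distribution; clipping to a high percentile rather than the maximum
    usually reduces the DNN-to-SNN approximation error."""
    return float(np.percentile(relu_activations, percentile))


def snn_rate_approximation(activation, threshold, timesteps):
    """Value actually represented when a ReLU activation is mapped to a
    firing rate over a small number of time steps (the low-latency regime)."""
    rate = np.clip(activation / threshold, 0.0, 1.0)
    return np.floor(rate * timesteps) / timesteps * threshold
```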
arXiv Detail & Related papers (2021-12-22T18:47:45Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
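For context on what fusing variational Bayes into an architecture typically involves, here is a minimal layer with a Gaussian weight posterior sampled via the reparameterization trick; this is a generic sketch (the class name and initialization are assumptions), not the BNN-DenseNet implementation.

```python
import torch


class VariationalLinear(torch.nn.Module):
    """Linear layer with a factorized Gaussian weight posterior.

    Each forward pass samples the weights (w = mu + sigma * eps), so
    predictions are stochastic; this sampling is the randomness that
    variational-Bayes layers add to the network.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.log_sigma = torch.nn.Parameter(torch.full((out_features, in_features), -5.0))

    def forward(self, x):
        eps = torch.randn_like(self.mu)
        weight = self.mu + torch.exp(self.log_sigma) * eps
        return torch.nn.functional.linear(x, weight)
```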
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Optimal Conversion of Conventional Artificial Neural Networks to Spiking
Neural Networks [0.0]
Spiking neural networks (SNNs) are biologically inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms.
Our method is promising for deployment on embedded platforms with limited energy and memory, offering better support for SNNs.
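A hedged sketch of the two mechanisms named above, threshold balancing and soft reset, in isolation (the max-based balancing rule and the function names are generic choices, not necessarily this paper's exact pipeline):

```python
import numpy as np


def balance_threshold(ann_activations):
    """Threshold balancing: set the layer's firing threshold from the
    activations observed in the source ANN layer, so firing rates stay in
    range after the weights are copied over."""
    return float(np.max(ann_activations))


def if_neuron_soft_reset(input_current, threshold):
    """Integrate-and-fire neuron with a soft reset: after a spike the
    threshold is subtracted instead of resetting the potential to zero,
    so residual charge carries over and less information is lost."""
    v = 0.0
    spikes = []
    for i in input_current:
        v += i
        spikes.append(1 if v >= threshold else 0)
        if spikes[-1]:
            v -= threshold
    return spikes
```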
arXiv Detail & Related papers (2021-02-28T12:04:22Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.