Rate Coding or Direct Coding: Which One is Better for Accurate, Robust,
and Energy-efficient Spiking Neural Networks?
- URL: http://arxiv.org/abs/2202.03133v1
- Date: Mon, 31 Jan 2022 16:18:07 GMT
- Title: Rate Coding or Direct Coding: Which One is Better for Accurate, Robust,
and Energy-efficient Spiking Neural Networks?
- Authors: Youngeun Kim, Hyoungseob Park, Abhishek Moitra, Abhiroop
Bhattacharjee, Yeshwanth Venkatesha, Priyadarshini Panda
- Abstract summary: Recent work on Spiking Neural Networks (SNNs) focuses on image classification tasks, so various coding techniques have been proposed to convert an image into temporal binary spikes.
Among them, rate coding and direct coding are regarded as prospective candidates for building a practical SNN system.
We conduct a comprehensive analysis of the two codings from three perspectives: accuracy, adversarial robustness, and energy-efficiency.
- Score: 4.872468969809081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work on Spiking Neural Networks (SNNs) focuses on image
classification tasks; therefore, various coding techniques have been proposed
to convert an image into temporal binary spikes. Among them, rate coding and
direct coding are regarded as prospective candidates for building a practical
SNN system, as they show state-of-the-art performance on large-scale datasets.
Despite their widespread use, little attention has been paid to comparing the
two coding schemes in a fair manner. In this paper, we conduct a comprehensive
analysis of the two codings from three perspectives: accuracy, adversarial
robustness, and energy-efficiency. First, we compare the performance of the
two coding techniques across various architectures and datasets. Then, we
measure the robustness of each coding technique against two adversarial attack
methods. Finally, we compare the energy-efficiency of the two coding schemes
on a digital hardware platform. Our results show that direct coding achieves
better accuracy, especially for a small number of timesteps. In contrast, rate
coding shows better robustness to adversarial attacks owing to its
non-differentiable spike generation process. Rate coding also yields higher
energy-efficiency than direct coding, which requires multi-bit precision for
the first layer. Our study explores the characteristics of the two codings,
an important design consideration for building SNNs. The code is available at
https://github.com/Intelligent-Computing-Lab-Yale/Rate-vs-Direct.
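For readers unfamiliar with the two schemes, the sketch below illustrates the
core difference, assuming a normalized image tensor in [0, 1] and T timesteps;
the function names are illustrative and do not come from the paper's
repository. Rate coding samples an independent binary spike per pixel per
timestep with probability equal to the pixel intensity, while direct coding
repeats the analog image at every timestep and lets the (multi-bit) first
layer generate spikes.

```python
import torch

def rate_code(x: torch.Tensor, T: int) -> torch.Tensor:
    """Rate coding: each pixel fires a {0, 1} spike per timestep with
    probability equal to its intensity. The sampling step is
    non-differentiable, the property behind the robustness result."""
    # x: (B, C, H, W) in [0, 1] -> spikes: (T, B, C, H, W)
    return (torch.rand(T, *x.shape) < x).float()

def direct_code(x: torch.Tensor, T: int) -> torch.Tensor:
    """Direct coding: the analog image is repeated at every timestep,
    so the first layer needs multi-bit input precision (the energy
    cost noted in the abstract)."""
    return x.unsqueeze(0).expand(T, *x.shape)

x = torch.rand(1, 3, 32, 32)   # dummy normalized image
spikes = rate_code(x, T=8)     # stochastic binary spike trains
analog = direct_code(x, T=8)   # repeated real-valued inputs
print(spikes.unique())         # tensor([0., 1.])
```

The time-averaged rate-coded input only approximates the image as T grows,
which is consistent with the paper's finding that direct coding is more
accurate at small timestep counts.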
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
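As a generic illustration of differentiable self-attention masking (a sketch
of the idea only, not this paper's model), one can replace the usual hard
-inf mask with a learnable sigmoid gate added to the attention logits, so
that gradients reach the mask itself:

```python
import torch
import torch.nn.functional as F

def soft_masked_attention(q, k, v, mask_logits):
    """Scaled dot-product attention with a differentiable mask: a
    learnable gate in (0, 1) is added to the logits in log space, so
    the mask itself receives gradients during backpropagation."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (..., L, L)
    gate = torch.sigmoid(mask_logits)             # soft mask in (0, 1)
    scores = scores + torch.log(gate + 1e-9)      # gate -> 0 masks out
    return F.softmax(scores, dim=-1) @ v

L, d = 16, 32
q = k = v = torch.randn(1, L, d)
mask_logits = torch.nn.Parameter(torch.zeros(L, L))  # trained jointly
out = soft_masked_attention(q, k, v, mask_logits)
print(out.shape)  # torch.Size([1, 16, 32])
```

A hard 0/1 mask can then be recovered after training by thresholding the
gate.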
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies outperform conventional channel coding schemes significantly in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
- Tensor Network Decoding Beyond 2D [2.048226951354646]
We introduce several techniques to generalize tensor network decoding to higher dimensions.
We numerically demonstrate that the decoding accuracy of our approach outperforms state-of-the-art decoders on the 3D surface code.
arXiv Detail & Related papers (2023-10-16T18:00:02Z)
- Efficient spike encoding algorithms for neuromorphic speech recognition [5.182266520875928]
Spiking Neural Networks (SNNs) are well suited to neuromorphic processor implementations.
Real-valued signals, however, are not well-suited to SNNs and must first be encoded as spike trains.
In this paper, we study four spike encoding methods in the context of a speaker-independent digit classification system.
arXiv Detail & Related papers (2022-07-14T17:22:07Z)
- LoopITR: Combining Dual and Cross Encoder Architectures for Image-Text Retrieval [117.15862403330121]
We propose LoopITR, which combines dual encoders and cross encoders in the same network for joint learning.
Specifically, we let the dual encoder provide hard negatives to the cross encoder, and use the more discriminative cross encoder to distill its predictions back to the dual encoder.
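A minimal sketch of the distillation step described above, assuming a
standard temperature-scaled KL formulation rather than the paper's exact
loss: the detached cross-encoder scores act as the teacher for the cheaper
dual encoder.

```python
import torch
import torch.nn.functional as F

def distill_loss(dual_scores, cross_scores, T=2.0):
    """Distill the detached cross-encoder scores (teacher) into the
    cheaper dual encoder (student) with a temperature-scaled KL."""
    student = F.log_softmax(dual_scores / T, dim=-1)
    teacher = F.softmax(cross_scores.detach() / T, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * T * T

# Scores of 8 image-text candidates for each of 4 queries.
dual = torch.randn(4, 8, requires_grad=True)   # two separate encoders
cross = torch.randn(4, 8)                      # joint, more accurate
loss = distill_loss(dual, cross)
loss.backward()                                # gradients reach dual only
```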
arXiv Detail & Related papers (2022-03-10T16:41:12Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
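The decomposition rests on plain binary arithmetic: a k-bit code w in
{0, ..., 2^k - 1} with bits b_i in {0, 1} satisfies b_i = (B_i + 1)/2 for
B_i in {-1, +1}, so w = sum_i 2^(i-1) * B_i + (2^k - 1)/2, and each B_i
defines one binary branch. A small NumPy check of this identity (generic,
not the authors' code):

```python
import numpy as np

k = 4                                       # bit-width of the code
w = np.arange(2 ** k)                       # every k-bit quantized value
bits = (w[:, None] >> np.arange(k)) & 1     # b_i in {0, 1}
B = 2 * bits - 1                            # B_i in {-1, +1}
recon = (B * 2.0 ** (np.arange(k) - 1)).sum(axis=1) + (2 ** k - 1) / 2
assert np.allclose(recon, w)                # exact reconstruction
```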
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We elaborately design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
arXiv Detail & Related papers (2020-08-12T04:26:18Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)