Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities
and Differences
- URL: http://arxiv.org/abs/2005.02183v1
- Date: Sat, 2 May 2020 10:19:37 GMT
- Title: Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities
and Differences
- Authors: Weihua He, YuJie Wu, Lei Deng, Guoqi Li, Haoyu Wang, Yang Tian, Wei
Ding, Wenhui Wang, Yuan Xie
- Abstract summary: Spiking neural networks (SNNs) and recurrent neural networks (RNNs) are benchmarked on neuromorphic data.
In this work, we make a systematic study to compare SNNs and RNNs on neuromorphic data.
- Score: 36.82069150045153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic data, recording frameless spike events, have attracted
considerable attention for the spatiotemporal information components and the
event-driven processing fashion. Spiking neural networks (SNNs) represent a
family of event-driven models with spatiotemporal dynamics for neuromorphic
computing, which are widely benchmarked on neuromorphic data. Interestingly,
researchers in the machine learning community can argue that recurrent
(artificial) neural networks (RNNs) also have the capability to extract
spatiotemporal features although they are not event-driven. Thus, the question
of "what will happen if we benchmark these two kinds of models together on
neuromorphic data" comes out but remains unclear. In this work, we make a
systematic study to compare SNNs and RNNs on neuromorphic data, taking the
vision datasets as a case study. First, we identify the similarities and
differences between SNNs and RNNs (including the vanilla RNNs and LSTM) from
the modeling and learning perspectives. To improve comparability and fairness,
we unify the supervised learning algorithm based on backpropagation through
time (BPTT), the loss function exploiting the outputs at all timesteps, the
network structure with stacked fully-connected or convolutional layers, and the
hyper-parameters during training. In particular, given the mainstream loss
function used in RNNs, we modify it, inspired by the rate coding scheme, to
approach that of SNNs. Furthermore, we tune the temporal resolution of datasets
to test model robustness and generalization. Finally, a series of comparison
experiments is conducted on two types of neuromorphic datasets: DVS-converted
(N-MNIST) and DVS-captured (DVS Gesture).
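As a concrete illustration of the unified setup described in the abstract, the sketch below shows a rate-coding-inspired loss that uses the readout at every timestep by averaging it over time. It is a minimal reconstruction from the abstract, not the authors' code, and the tensor layout `[T, batch, classes]` is an assumption.

```python
# Minimal sketch (assumed tensor layout, not the authors' implementation):
# both the SNN and the RNN emit a readout at every timestep, and the loss
# uses all T timesteps by averaging them, mimicking rate coding.
import torch
import torch.nn.functional as F

def rate_coded_loss(outputs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """outputs: [T, batch, num_classes] per-timestep logits (spike counts or RNN readouts)."""
    mean_logits = outputs.mean(dim=0)          # time-averaged readout, akin to a firing rate
    return F.cross_entropy(mean_logits, labels)

# Trained with BPTT, the gradient of this loss reaches every timestep,
# which keeps the SNN and RNN pipelines directly comparable.
```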
Related papers
- Enhancing SNN-based Spatio-Temporal Learning: A Benchmark Dataset and Cross-Modality Attention Model [30.66645039322337]
High-quality benchmark datasets are of great importance to the advances of Spiking Neural Networks (SNNs).
Yet, the SNN-based cross-modal fusion remains underexplored.
In this work, we present a neuromorphic dataset that can better exploit the inherent spatio-temporal properties of SNNs.
arXiv Detail & Related papers (2024-10-21T06:59:04Z) - Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
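The summary above does not spell out the HSD objective, so the snippet below only illustrates a generic per-timestep ("step-wise") knowledge-distillation loss in which a hypothetical ANN teacher's soft labels supervise the SNN readout at every timestep; all names and the temperature `T` are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def stepwise_distillation_loss(snn_logits_t, teacher_logits, T=2.0):
    """snn_logits_t: [steps, batch, classes] SNN readout per timestep.
    teacher_logits: [batch, classes] soft targets from a (hypothetical) ANN teacher."""
    target = F.softmax(teacher_logits / T, dim=-1)
    loss = 0.0
    for logits in snn_logits_t:                       # distill at every timestep
        log_p = F.log_softmax(logits / T, dim=-1)
        loss = loss + F.kl_div(log_p, target, reduction="batchmean") * (T * T)
    return loss / snn_logits_t.shape[0]
```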
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - Digit Recognition using Multimodal Spiking Neural Networks [3.046906600991174]
Spiking neural networks (SNNs) are the third generation of neural networks that are biologically inspired to process data.
SNNs are used to process event-based data due to their neuromorphic nature.
arXiv Detail & Related papers (2024-08-31T22:27:40Z) - Co-learning synaptic delays, weights and adaptation in spiking neural
networks [0.0]
Spiking neural networks (SNN) distinguish themselves from artificial neural networks (ANN) because of their inherent temporal processing and spike-based computations.
We show that data processing with spiking neurons can be enhanced by co-learning the connection weights with two other biologically inspired neuronal features.
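As a rough sketch of what co-learning synaptic delays alongside weights can look like (not the paper's implementation), the snippet below applies one per-synapse delay, rounded to whole timesteps, before the weighted sum; the adaptation mechanism named in the title is omitted.

```python
import numpy as np

def delayed_weighted_input(spikes, weights, delays):
    """spikes:  [T, n_pre] binary spike trains
       weights: [n_pre, n_post] synaptic weights (learned)
       delays:  [n_pre, n_post] per-synapse delays in timesteps (co-learned in the paper)."""
    T, n_pre = spikes.shape
    n_post = weights.shape[1]
    current = np.zeros((T, n_post))
    for i in range(n_pre):
        for j in range(n_post):
            d = int(round(delays[i, j]))
            if d >= T:
                continue                              # spike never arrives within the window
            shifted = np.zeros(T)
            shifted[d:] = spikes[: T - d, i]          # spike arrives d steps later
            current[:, j] += weights[i, j] * shifted
    return current

# In the paper's setting the delays are trained jointly with the weights
# (e.g., via a differentiable relaxation); here they are plain inputs.
```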
arXiv Detail & Related papers (2023-09-12T09:13:26Z) - Transferability of coVariance Neural Networks and Application to
Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs)
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
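A minimal sketch of the coVariance filter that VNNs are built from, assuming the standard polynomial-graph-filter form with the sample covariance matrix as the graph; function and variable names are illustrative.

```python
import numpy as np

def covariance_filter(X, h):
    """One coVariance filter: y = sum_k h[k] * C^k @ x for each sample x.
    X: [n_samples, n_features] data; h: list of K+1 filter taps (learned in a VNN)."""
    C = np.cov(X, rowvar=False)                 # sample covariance plays the role of the graph
    Y = np.zeros(X.shape)
    Ck_X = X.astype(float)                      # C^0 @ x
    for tap in h:
        Y += tap * Ck_X
        Ck_X = Ck_X @ C                         # advance to C^(k+1) @ x
    return Y

# A VNN stacks such filters with pointwise nonlinearities; because the filter is a
# polynomial in C, performance can transfer across datasets whose covariance
# matrices converge to a common limit, which is the transferability discussed above.
```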
arXiv Detail & Related papers (2023-05-02T22:15:54Z) - Predicting Brain Age using Transferable coVariance Neural Networks [119.45320143101381]
We have recently studied covariance neural networks (VNNs) that operate on sample covariance matrices.
In this paper, we demonstrate the utility of VNNs in inferring brain age using cortical thickness data.
Our results show that VNNs exhibit multi-scale and multi-site transferability for inferring brain age
In the context of brain age in Alzheimer's disease (AD), our experiments show that VNN outputs are interpretable, as the brain age predicted by VNNs is significantly elevated for AD patients relative to healthy subjects.
arXiv Detail & Related papers (2022-10-28T18:58:34Z) - Heterogeneous Recurrent Spiking Neural Network for Spatio-Temporal
Classification [13.521272923545409]
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence.
This paper presents a heterogeneous spiking neural network (HRSNN) with unsupervised learning for video recognition tasks.
We show that HRSNN can achieve performance similar to state-of-the-art backpropagation-trained supervised SNNs, but with less computation.
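The summary does not define the heterogeneity; one common reading is that neurons carry individually different dynamics, e.g. per-neuron membrane time constants. The sketch below illustrates only that generic idea and is not the HRSNN model.

```python
import numpy as np

def heterogeneous_lif_step(v, spikes_in, W, tau, v_th=1.0):
    """One step of leaky integrate-and-fire neurons with per-neuron time constants.
    v: [n] membrane potentials, spikes_in: [m] input spikes, W: [m, n], tau: [n]."""
    decay = np.exp(-1.0 / tau)                  # each neuron leaks at its own rate
    v = decay * v + spikes_in @ W
    out = (v >= v_th).astype(float)             # fire where the threshold is crossed
    v = v * (1.0 - out)                         # hard reset after a spike
    return v, out
```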
arXiv Detail & Related papers (2022-09-22T16:34:01Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
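The DSR method itself is not described in the summary; as a simpler stand-in, the sketch below shows a generic straight-through construction that emits spikes in the forward pass and differentiates through a clamped, rate-like representation in the backward pass. The threshold of 1.0 and the surrogate window are assumptions.

```python
import torch

class SpikeRate(torch.autograd.Function):
    """Toy illustration: binary spikes forward, but gradients taken as if the output
    were clamp(membrane, 0, 1), i.e. differentiation on a smoothed spike representation
    rather than on the non-differentiable spikes themselves."""
    @staticmethod
    def forward(ctx, membrane):                 # membrane: [T, batch, n]
        ctx.save_for_backward(membrane)
        return (membrane >= 1.0).float()        # spikes, non-differentiable

    @staticmethod
    def backward(ctx, grad_out):
        (membrane,) = ctx.saved_tensors
        surrogate_grad = ((membrane >= 0.0) & (membrane <= 1.0)).float()
        return grad_out * surrogate_grad

# Usage: spikes = SpikeRate.apply(membrane_potentials)
```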
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Spiking Neural Networks -- Part II: Detecting Spatio-Temporal Patterns [38.518936229794214]
Spiking Neural Networks (SNNs) have the unique ability to detect information encoded in spatio-temporal signals.
We review models and training algorithms for the dominant approach that considers SNNs as a Recurrent Neural Network (RNN)
We describe an alternative approach that relies on probabilistic models for spiking neurons, allowing the derivation of local learning rules via gradient estimates.
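A minimal sketch of the kind of probabilistic spiking neuron and local maximum-likelihood update alluded to above: the spike is Bernoulli with probability sigmoid(w . x), so the log-likelihood gradient involves only the presynaptic input and this neuron's own output. This illustrates the general idea, not the paper's specific model.

```python
import numpy as np

def local_update(x, w, target_spike, lr=0.1):
    """Maximum-likelihood update for a Bernoulli spiking neuron:
    p(spike = 1 | x) = sigmoid(w . x); the gradient (target - p) * x is
    local, involving only the presynaptic input and this neuron's output."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return w + lr * (target_spike - p) * x
```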
arXiv Detail & Related papers (2020-10-27T11:47:42Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
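A minimal NAM-style sketch of that idea: one small subnetwork per input feature, summed (with a bias) into the prediction, so each feature's contribution can be read off directly. The paper's ExU activations and exact output weighting are omitted; hidden size and names are assumptions.

```python
import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    """One small subnetwork per input feature; the prediction is the sum of their outputs."""
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                       # x: [batch, num_features]
        contributions = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0)
```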
arXiv Detail & Related papers (2020-04-29T01:28:32Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use spatio-temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
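As an illustration of the kernel the title refers to, the sketch below uses a rectified-linear postsynaptic potential of the general form max(0, t - t_spike) and a membrane potential that is a weighted sum of such PSPs; the paper's exact kernel and normalization may differ.

```python
import numpy as np

def rel_psp(t, t_spike):
    """Rectified-linear postsynaptic potential: zero before the presynaptic spike,
    then growing linearly with elapsed time."""
    return np.maximum(0.0, t - np.asarray(t_spike, dtype=float))

def membrane_potential(t, spike_times, weights):
    """Potential as a weighted sum of rectified-linear PSPs, which keeps the mapping
    from spike times to potential piecewise linear and easy to backpropagate through."""
    return float(np.sum(np.asarray(weights, dtype=float) * rel_psp(t, spike_times)))
```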
arXiv Detail & Related papers (2020-03-26T11:13:07Z)