2D versus 3D Convolutional Spiking Neural Networks Trained with
Unsupervised STDP for Human Action Recognition
- URL: http://arxiv.org/abs/2205.13474v1
- Date: Thu, 26 May 2022 16:34:22 GMT
- Title: 2D versus 3D Convolutional Spiking Neural Networks Trained with
Unsupervised STDP for Human Action Recognition
- Authors: Mireille El-Assal, Pierre Tirilly, Ioan Marius Bilasco
- Abstract summary: Spiking neural networks (SNNs) are third generation biologically plausible models that process the information in the form of spikes.
Unsupervised learning with SNNs using the spike timing dependent plasticity (STDP) rule has the potential to overcome some bottlenecks.
We show that STDP-based convolutional SNNs can learn motion patterns using 3D kernels, thus enabling motion-based recognition from videos.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current advances in technology have highlighted the importance of video
analysis in the domain of computer vision. However, video analysis has
considerably high computational costs with traditional artificial neural
networks (ANNs). Spiking neural networks (SNNs) are third generation
biologically plausible models that process the information in the form of
spikes. Unsupervised learning with SNNs using the spike timing dependent
plasticity (STDP) rule has the potential to overcome some bottlenecks of
regular artificial neural networks, but STDP-based SNNs are still immature and
their performance is far behind that of ANNs. In this work, we study the
performance of SNNs when challenged with the task of human action recognition,
because this task has many real-time applications in computer vision, such as
video surveillance. We introduce a multi-layered 3D convolutional
SNN model trained with unsupervised STDP. We compare the performance of this
model to those of a 2D STDP-based SNN when challenged with the KTH and Weizmann
datasets. We also compare single-layer and multi-layer versions of these models
in order to get an accurate assessment of their performance. We show that
STDP-based convolutional SNNs can learn motion patterns using 3D kernels, thus
enabling motion-based recognition from videos. Finally, we give evidence that
3D convolution is superior to 2D convolution with STDP-based SNNs, especially
when dealing with long video sequences.
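The advantage that the abstract claims for 3D kernels can be illustrated with a toy NumPy sketch (purely hypothetical, not the paper's architecture): a kernel that spans time responds differently to a pattern moving rightward versus leftward, whereas per-frame 2D responses are identical in both cases, so motion direction is invisible to a 2D kernel.

```python
import numpy as np

# Hypothetical toy video: a bright pixel moving one step right per frame.
video = np.zeros((3, 5, 5))            # (time, height, width)
for t in range(3):
    video[t, 2, 1 + t] = 1.0

# A 2D kernel sees one frame at a time: it cannot distinguish
# rightward from leftward motion.
k2d = np.ones((3, 3))

# A 3D kernel spans time as well: this one is tuned to rightward motion
# (its active cell shifts right as t increases).
k3d = np.zeros((3, 3, 3))              # (time, height, width)
for t in range(3):
    k3d[t, 1, t] = 1.0

def response3d(clip, kernel):
    """Single valid-position 3D correlation (no padding, stride 1)."""
    return float(np.sum(clip * kernel))

# Extract the 3x3x3 patch around the moving pixel.
patch_right = video[:, 1:4, 1:4]
# Time-reversed clip = same frames, leftward motion.
patch_left = patch_right[::-1]

print(response3d(patch_right, k3d))    # strong response to rightward motion
print(response3d(patch_left, k3d))     # weak response to reversed motion
# Summed per-frame 2D responses are identical for both directions:
print(sum(np.sum(f * k2d) for f in patch_right),
      sum(np.sum(f * k2d) for f in patch_left))
```

The 3D kernel fires strongly only when the spatial pattern shifts in the direction it encodes, which is the sense in which 3D convolutions can learn motion patterns.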
Related papers
- Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- S3TC: Spiking Separated Spatial and Temporal Convolutions with
Unsupervised STDP-based Learning for Action Recognition [1.2123876307427106]
Spiking Neural Networks (SNNs) have significantly lower computational costs (thousands of times) than regular non-spiking networks when implemented on neuromorphic hardware.
We introduce, for the first time, Spiking Separated Spatial and Temporal Convolutions (S3TCs) for the sake of reducing the number of parameters required for video analysis.
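The parameter saving from separating spatial and temporal convolutions can be sketched with simple counting (the layer sizes below are hypothetical, and the exact S3TC architecture may differ; the factorization is analogous to (2+1)D convolutions):

```python
# Hypothetical parameter counts for one convolutional layer.
def params_3d(c_in, c_out, kt, kh, kw):
    """Full 3D convolution: every kernel spans time, height and width."""
    return c_out * c_in * kt * kh * kw

def params_separated(c_in, c_out, kt, kh, kw):
    """Separated convolutions: a spatial (1, kh, kw) convolution followed
    by a temporal (kt, 1, 1) convolution."""
    spatial = c_out * c_in * 1 * kh * kw
    temporal = c_out * c_out * kt * 1 * 1
    return spatial + temporal

full = params_3d(64, 128, 3, 3, 3)        # 128 * 64 * 27 = 221184
sep = params_separated(64, 128, 3, 3, 3)  # 128*64*9 + 128*128*3 = 122880
print(full, sep)
```

For these (assumed) layer sizes the separated form needs roughly half the parameters of the full 3D kernel, and the gap widens for larger temporal extents.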
arXiv Detail & Related papers (2023-09-22T10:05:35Z)
- Spiking Two-Stream Methods with Unsupervised STDP-based Learning for
Action Recognition [1.9981375888949475]
Deep Convolutional Neural Networks (CNNs) are currently the state-of-the-art methods for video analysis.
We use Convolutional Spiking Neural Networks (CSNNs) trained with the unsupervised Spike Timing-Dependent Plasticity (STDP) rule for action classification.
We show that two-stream CSNNs can successfully extract information from videos despite using limited training data.
arXiv Detail & Related papers (2023-06-23T20:54:44Z)
- An Unsupervised STDP-based Spiking Neural Network Inspired By
Biologically Plausible Learning Rules and Connections [10.188771327458651]
Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly.
We design an adaptive synaptic filter and introduce the adaptive spiking threshold to enrich the representation ability of SNNs.
Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs in the MNIST and FashionMNIST datasets.
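The plain pair-based STDP rule that these papers build on can be sketched as follows (an illustrative single-synapse update with assumed parameter names; the paper above adds an adaptive synaptic filter and adaptive thresholds on top of it):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one (t_pre < t_post), depress otherwise; the
    magnitude decays exponentially with the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # causal pair: potentiation (LTP)
    else:
        dw = -a_minus * np.exp(dt / tau)   # anti-causal pair: depression (LTD)
    return np.clip(w + dw, 0.0, 1.0)       # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # pre before post: w increases
print(w > 0.5)   # True
```

Because the update depends only on local spike times, the rule needs no labels or global error signal, which is what makes the learning unsupervised.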
arXiv Detail & Related papers (2022-07-06T14:53:32Z)
- Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural
Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Spiking Neural Networks -- Part I: Detecting Spatial Patterns [38.518936229794214]
Spiking Neural Networks (SNNs) are biologically inspired machine learning models that build on dynamic neuronal models processing binary and sparse spiking signals in an event-driven, online fashion.
SNNs can be implemented on neuromorphic computing platforms that are emerging as energy-efficient co-processors for learning and inference.
arXiv Detail & Related papers (2020-10-27T11:37:22Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
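The rate-coding idea behind ANN-to-SNN conversion can be sketched with a single neuron (a minimal illustrative model with assumed names, not the paper's layer-wise framework): over a long time window, the firing rate of an integrate-and-fire neuron with subtractive reset approximates a scaled ReLU of its input.

```python
def if_neuron_rate(inp, threshold=1.0, steps=1000):
    """Integrate-and-fire neuron driven by a constant input. For inputs
    below the threshold per step, the firing rate over the window
    approximates max(0, inp) / threshold, i.e. a scaled ReLU."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += inp                 # integrate the input each time step
        if v >= threshold:
            v -= threshold       # subtractive ("soft") reset
            spikes += 1
    return spikes / steps

print(if_neuron_rate(0.3))   # ~0.3, matching ReLU(0.3)
print(if_neuron_rate(-0.2))  # 0.0, matching ReLU(-0.2)
```

This correspondence is why a trained ANN's activations can be mapped onto SNN firing rates, with conversion methods then correcting for threshold scaling and the finite time window.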
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- An Efficient Spiking Neural Network for Recognizing Gestures with a DVS
Camera on the Loihi Neuromorphic Processor [12.118084418840152]
Spiking Neural Networks (SNNs) have come under the spotlight for machine learning based applications.
We show our methodology for the design of an SNN that achieves nearly the same accuracy as its corresponding Deep Neural Network (DNN).
Our SNN achieves 89.64% classification accuracy and occupies only 37 Loihi cores.
arXiv Detail & Related papers (2020-05-16T17:00:10Z)
- Event-Based Angular Velocity Regression with Spiking Networks [51.145071093099396]
Spiking Neural Networks (SNNs) process information conveyed as temporal spikes rather than numeric values.
We address, for the first time, the problem of regressing numerical values over time from the events produced by an event camera.
We show that we can successfully train an SNN to perform angular velocity regression.
arXiv Detail & Related papers (2020-03-05T17:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.