From "What" to "When" -- a Spiking Neural Network Predicting Rare Events
and Time to their Occurrence
- URL: http://arxiv.org/abs/2311.05210v1
- Date: Thu, 9 Nov 2023 08:47:23 GMT
- Title: From "What" to "When" -- a Spiking Neural Network Predicting Rare Events
and Time to their Occurrence
- Authors: Mikhail Kiselev
- Abstract summary: This research paper presents a novel approach to learning the corresponding predictive model by an SNN consisting of leaky integrate-and-fire (LIF) neurons.
The proposed method leverages specially designed local synaptic plasticity rules and a novel columnar-layered SNN architecture.
It is demonstrated that this SNN achieves superior prediction accuracy compared with established machine learning techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the reinforcement learning (RL) tasks, the ability to predict receiving
reward in the near or more distant future means the ability to evaluate the
current state as more or less close to the target state (labelled by the reward
signal). In the present work, we utilize a spiking neural network (SNN) to
predict time to the next target event (reward - in case of RL). In the context
of SNNs, events are represented as spikes emitted by network neurons or input
nodes. It is assumed that target events are indicated by spikes emitted by a
special network input node. Using a description of the current state, encoded
as spikes from the other input nodes, the network should predict the
approximate time of the next target event. This research paper presents a novel
approach to learning the corresponding predictive model by an SNN consisting of
leaky integrate-and-fire (LIF) neurons. The proposed method leverages specially
designed local synaptic plasticity rules and a novel columnar-layered SNN
architecture. Similar to our previous works, this study places a strong
emphasis on the hardware-friendliness of the proposed models, ensuring their
efficient implementation on modern and future neuroprocessors. The approach
proposed was tested on a simple reward prediction task in one of the RL
benchmark Atari games, Pong. It was demonstrated that the SNN described in this
paper gives superior prediction accuracy in comparison with established machine
learning techniques, such as decision tree algorithms and convolutional neural
networks.
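The abstract above describes an SNN built from leaky integrate-and-fire (LIF) neurons. As a rough illustration of that neuron model only (not the paper's implementation; the time constant, threshold, and input drive below are assumed values chosen for the sketch), a single LIF unit can be simulated as:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative only: tau, v_thresh, and the input current are
# assumed values, not parameters taken from the paper.

def simulate_lif(input_current, tau=20.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    """Integrate an input-current sequence; return spike times (step indices)."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:       # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset         # reset membrane potential after the spike
    return spikes

# Constant drive strong enough to make the neuron fire repeatedly.
spike_times = simulate_lif([0.1] * 100)
```

With a constant input above threshold equilibrium, the membrane potential charges toward threshold, fires, and resets, yielding a regular spike train; the paper's columnar-layered architecture and local plasticity rules operate on populations of such units.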
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Quantum Recurrent Neural Networks for Sequential Learning [11.133759363113867]
We propose a new kind of quantum recurrent neural network (QRNN) to find quantum advantageous applications in the near term.
Our QRNN is built by stacking the QRBs in a staggered way that can greatly reduce the algorithm's requirement with regard to the coherent time of quantum devices.
The numerical experiments show that our QRNN achieves much better performance in prediction (classification) accuracy against the classical RNN and state-of-the-art QNN models for sequential learning.
arXiv Detail & Related papers (2023-02-07T04:04:39Z) - Object Detection with Spiking Neural Networks on Automotive Event Data [0.0]
We propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications.
In this paper, we conducted experiments on two automotive event datasets, establishing new state-of-the-art classification results for spiking neural networks.
arXiv Detail & Related papers (2022-05-09T14:39:47Z) - Random Quantum Neural Networks (RQNN) for Noisy Image Recognition [0.9205287316703888]
We introduce a novel class of supervised Random Quantum Neural Networks (RQNNs) with a robust training strategy.
The proposed RQNN employs hybrid classical-quantum algorithms with superposition state and amplitude encoding features.
Experiments on the MNIST, FashionMNIST, and KMNIST datasets demonstrate that the proposed RQNN model achieves an average classification accuracy of 94.9%.
arXiv Detail & Related papers (2022-03-03T15:15:29Z) - Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for
Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z) - Spiking Generative Adversarial Networks With a Neural Network
Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning [31.78005607111787]
Training neural networks to reproduce spiking patterns is a central problem in neuromorphic computing.
This work proposes to train SNNs so as to match distributions of spiking signals rather than individual spiking signals.
arXiv Detail & Related papers (2021-11-02T17:20:54Z) - SpikeMS: Deep Spiking Neural Network for Motion Segmentation [7.491944503744111]
SpikeMS is the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation.
We show that SpikeMS is capable of incremental predictions, or predictions from smaller amounts of test data than it is trained on.
arXiv Detail & Related papers (2021-05-13T21:34:55Z) - Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.