Coarse scale representation of spiking neural networks: backpropagation through spikes and application to neuromorphic hardware
- URL: http://arxiv.org/abs/2007.06176v1
- Date: Mon, 13 Jul 2020 04:02:35 GMT
- Title: Coarse scale representation of spiking neural networks: backpropagation through spikes and application to neuromorphic hardware
- Authors: Angel Yanguas-Gil
- Abstract summary: We explore recurrent representations of leaky integrate and fire neurons operating at a timescale equal to their absolute refractory period.
We find that the recurrent model leads to high classification accuracy using spike trains just 4 steps long during training.
We also observe good transfer back to continuous implementations of leaky integrate and fire neurons.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we explore recurrent representations of leaky integrate and fire
neurons operating at a timescale equal to their absolute refractory period. Our
coarse time scale approximation is obtained using a probability distribution
function for spike arrivals that is homogeneously distributed over this time
interval. This leads to a discrete representation that exhibits the same
dynamics as the continuous model, enabling efficient large scale simulations
and backpropagation through the recurrent implementation. We use this approach
to explore the training of deep spiking neural networks including
convolutional, all-to-all connectivity, and maxpool layers directly in PyTorch.
We found that the recurrent model leads to high classification accuracy using
spike trains just 4 steps long during training. We also observed good transfer back
to continuous implementations of leaky integrate and fire neurons. Finally, we
applied this approach to some of the standard control problems as a first step
to explore reinforcement learning using neuromorphic chips.
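The abstract gives enough detail to sketch the basic training setup: a discrete-time leaky integrate and fire layer unrolled for a handful of steps and trained with backpropagation through the spikes. The snippet below is a minimal illustration in PyTorch; the layer sizes, decay constant, and the surrogate gradient used at the spike nonlinearity are assumptions and do not reproduce the paper's probability-based coarse-scale derivation.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient, a common stand-in for
    backpropagation through spikes (the paper's exact formulation may differ)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Triangular surrogate centred on the threshold crossing.
        return grad_out * torch.clamp(1.0 - v.abs(), min=0.0)


class CoarseLIFLayer(nn.Module):
    """Discrete-time leaky integrate-and-fire layer, one step per refractory period."""

    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay = decay
        self.threshold = threshold

    def forward(self, x, v=None):
        if v is None:
            v = torch.zeros(x.shape[0], self.fc.out_features, device=x.device)
        v = self.decay * v + self.fc(x)        # leak, then integrate synaptic input
        s = SpikeFn.apply(v - self.threshold)  # spike where the threshold is crossed
        v = v * (1.0 - s)                      # hard reset after a spike
        return s, v


# Unroll over a spike train just 4 steps long, as described in the abstract.
layer = CoarseLIFLayer(784, 100)
x = (torch.rand(32, 4, 784) < 0.5).float()     # toy Bernoulli spike encoding
v, out = None, []
for t in range(4):
    s, v = layer(x[:, t], v)
    out.append(s)
rate = torch.stack(out, dim=1).mean(dim=1)     # spike-rate readout for a loss term
```

Because the unrolled graph is only four steps deep, standard autograd handles the recurrence without the memory cost of long spike trains.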
Related papers
- Gradient-free training of recurrent neural networks [3.272216546040443]
We introduce a computational approach to construct all weights and biases of a recurrent neural network without using gradient-based methods.
The approach is based on a combination of random feature networks and Koopman operator theory for dynamical systems.
In computational experiments on time series, forecasting for chaotic dynamical systems, and control problems, we observe improved training time and forecasting accuracy for the recurrent neural networks we construct.
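As a rough illustration of what a gradient-free construction can look like, the sketch below fixes a random recurrent network and fits only a linear readout in closed form (an echo-state-style setup). It is not the Koopman-operator construction from the paper; all names and parameter values are illustrative.

```python
import torch

torch.manual_seed(0)
n_in, n_hidden = 1, 200
W_in = 0.5 * torch.randn(n_hidden, n_in)
W_rec = torch.randn(n_hidden, n_hidden)
W_rec *= 0.9 / torch.linalg.eigvals(W_rec).abs().max()   # keep spectral radius < 1


def run_reservoir(u):
    """u: (T, n_in) input sequence -> (T, n_hidden) fixed random recurrent features."""
    h = torch.zeros(n_hidden)
    states = []
    for t in range(u.shape[0]):
        h = torch.tanh(W_in @ u[t] + W_rec @ h)
        states.append(h)
    return torch.stack(states)


# Toy one-step-ahead forecast on a sine wave; no gradient-based training anywhere.
t = torch.linspace(0, 20, 400)
u = torch.sin(t).unsqueeze(-1)
H = run_reservoir(u[:-1])
W_out = torch.linalg.lstsq(H, u[1:]).solution             # closed-form linear readout
pred = H @ W_out                                          # predicted next values
```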
arXiv Detail & Related papers (2024-10-30T21:24:34Z)
- Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation [6.233189707488025]
In this article, we analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof.
We show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series.
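A minimal sketch of an adaptive LIF update, assuming a standard threshold-adaptation mechanism in which an adaptation trace rises with each spike and slowly decays, raising the effective threshold; the specific parameters and formulation used in the paper may differ.

```python
import torch


def adaptive_lif_step(v, a, x, decay=0.9, rho=0.95, beta=1.8, threshold=1.0):
    """One discrete step of an adaptive LIF neuron: the adaptation trace `a`
    grows with every spike and decays slowly, so sustained input produces
    progressively sparser spiking."""
    v = decay * v + x                        # leak and integrate the input current
    s = (v > threshold + beta * a).float()   # spike against the adapted threshold
    v = v * (1.0 - s)                        # reset the membrane after a spike
    a = rho * a + (1.0 - rho) * s            # update the adaptation trace
    return v, a, s


# Constant drive: spiking slows down as the adaptation variable builds up.
v, a = torch.zeros(5), torch.zeros(5)
for t in range(20):
    v, a, s = adaptive_lif_step(v, a, x=torch.full((5,), 0.6))
```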
arXiv Detail & Related papers (2024-08-14T12:49:58Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
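One possible reading of the multi-threshold idea is that each neuron emits graded spikes against several thresholds, carrying more information per time step than a binary spike. The snippet below is a speculative illustration of that mechanism, not the Spiking-UNet implementation; the threshold values are placeholders.

```python
import torch


def multi_threshold_spike(v, thresholds=(1.0, 2.0, 4.0)):
    """Graded spike: emit the largest threshold the membrane potential exceeds,
    so each event carries more information than a binary spike."""
    out = torch.zeros_like(v)
    for th in thresholds:                     # thresholds in increasing order
        out = torch.where(v >= th, torch.full_like(v, th), out)
    return out


v = torch.tensor([0.5, 1.5, 3.0, 5.0])
print(multi_threshold_spike(v))               # tensor([0., 1., 2., 4.])
```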
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
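One common way to handle irregular observations in a continuous-time recurrent model is to let the hidden state decay over the elapsed gap before applying a discrete update. The sketch below shows that pattern with a GRU cell and learnable time constants; it is an illustrative stand-in, not the CTRNN variant evaluated in the paper.

```python
import torch
import torch.nn as nn


class DecayGRUCell(nn.Module):
    """The hidden state decays exponentially over the irregular gap `dt` since
    the last observation, then a standard GRU cell folds in the new measurement."""

    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.gru = nn.GRUCell(n_in, n_hidden)
        self.log_tau = nn.Parameter(torch.zeros(n_hidden))  # per-unit time constants

    def forward(self, x, h, dt):
        h = h * torch.exp(-dt.unsqueeze(-1) / torch.exp(self.log_tau))
        return self.gru(x, h)


cell = DecayGRUCell(3, 16)
h = torch.zeros(8, 16)
x = torch.randn(8, 3)       # e.g. a glucose reading plus covariates (illustrative)
dt = torch.rand(8)          # hours since the previous measurement (irregular)
h = cell(x, h, dt)
```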
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Stochastic Recurrent Neural Network for Multistep Time Series Forecasting [0.0]
We leverage advances in deep generative models and the concept of state space models to propose an adaptation of the recurrent neural network for time series forecasting.
Our model preserves the architectural workings of a recurrent neural network for which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling.
arXiv Detail & Related papers (2021-04-26T01:43:43Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Event-Based Backpropagation can compute Exact Gradients for Spiking Neural Networks [0.0]
Spiking neural networks combine analog computation with event-based communication using discrete spikes.
For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function.
We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike time or voltage based loss function and report competitive performance.
arXiv Detail & Related papers (2020-09-17T15:45:00Z)
- Supervised Learning in Temporally-Coded Spiking Neural Networks with Approximate Backpropagation [0.021506382989223777]
We propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification.
The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive.
In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation-based non-spiking network.
arXiv Detail & Related papers (2020-07-27T03:39:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.