Addressing the speed-accuracy simulation trade-off for adaptive spiking
neurons
- URL: http://arxiv.org/abs/2311.11390v1
- Date: Sun, 19 Nov 2023 18:21:45 GMT
- Title: Addressing the speed-accuracy simulation trade-off for adaptive spiking
neurons
- Authors: Luke Taylor, Andrew J King, Nicol S Harper
- Abstract summary: We present an algorithmic reinterpretation of the adaptive leaky integrate-and-fire (ALIF) model.
We obtain over a $50\times$ training speedup using small DTs on synthetic benchmarks.
We also showcase how our model makes it possible to quickly and accurately fit real electrophysiological recordings of cortical neurons.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The adaptive leaky integrate-and-fire (ALIF) model is fundamental within
computational neuroscience and has been instrumental in studying our brains
$\textit{in silico}$. Due to the sequential nature of simulating these neural
models, a commonly faced issue is the speed-accuracy trade-off: either
accurately simulate a neuron using a small discretisation time-step (DT), which
is slow, or more quickly simulate a neuron using a larger DT and incur a loss
in simulation accuracy. Here we provide a solution to this dilemma, by
algorithmically reinterpreting the ALIF model, reducing the sequential
simulation complexity and permitting a more efficient parallelisation on GPUs.
We computationally validate our implementation to obtain over a $50\times$
training speedup using small DTs on synthetic benchmarks. We also obtained
comparable performance to the standard ALIF implementation on different
supervised classification tasks - yet in a fraction of the training time.
Lastly, we showcase how our model makes it possible to quickly and accurately
fit real electrophysiological recordings of cortical neurons, where very fine
sub-millisecond DTs are crucial for capturing exact spike timing.
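To make the trade-off concrete, here is a minimal sketch of the standard sequential ALIF simulation the abstract refers to. The parameter names, values, reset convention, and update order are illustrative assumptions rather than the paper's exact formulation; the point is that each time step depends on the previous one, so halving the DT doubles the number of sequential iterations.

```python
import numpy as np

def simulate_alif(I, dt, tau_mem=20e-3, tau_adapt=200e-3, v_th=1.0, beta=0.5):
    """Sequential Euler simulation of an adaptive leaky integrate-and-fire neuron.

    I  : input current per time step, shape (T,)
    dt : discretisation time step in seconds (the DT of the abstract)
    Parameter values and the reset rule are illustrative assumptions.
    """
    v, a = 0.0, 0.0                     # membrane potential, adaptation variable
    spikes = np.zeros(len(I))
    for t in range(len(I)):             # the sequential loop the paper reworks
        v += dt / tau_mem * (I[t] - v)  # leaky integration of the input
        a -= dt / tau_adapt * a         # adaptation decays back towards zero
        if v > v_th + a:                # threshold is raised by adaptation
            spikes[t] = 1.0
            v = 0.0                     # reset membrane potential
            a += beta                   # each spike increases the adaptation
    return spikes
```

Because each iteration reads the state written by the previous one, a naive GPU port gains little; the paper's contribution is an algorithmic reinterpretation that removes much of this sequential dependency.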
Related papers
- Model calibration using a parallel differential evolution algorithm in computational neuroscience: simulation of stretch induced nerve deficit [1.1026741683718058]
We use a coupled mechanical electrophysiological model with several free parameters that are required to be calibrated against experimental results.
The calibration is carried out by means of an evolutionary algorithm (differential evolution, DE) that needs to evaluate each configuration of parameters on six different damage cases.
We have developed a parallel implementation based on OpenMP that runs on a multiprocessor system, taking advantage of all the available computational power.
arXiv Detail & Related papers (2024-09-19T08:40:32Z) - SparseProp: Efficient Event-Based Simulation and Training of Sparse
Recurrent Spiking Neural Networks [4.532517021515834]
Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials.
We introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
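SparseProp's algorithm is not spelled out in this summary, but the general idea of event-based simulation can be sketched: instead of stepping every neuron at every DT, state is updated lazily, only when a spike event touches it. Below is a toy event-driven LIF network with instantaneous (delta) synapses; all names and conventions are illustrative assumptions, not SparseProp's actual method.

```python
import heapq
import math

def event_driven_lif(synapses, input_spikes, tau=20e-3, v_th=1.0):
    """Toy event-driven LIF simulation (illustrative, not SparseProp itself).

    synapses     : dict pre -> list of (post, weight, delay); sparse connectivity
                   (delays are assumed positive so events move forward in time)
    input_spikes : list of (time, neuron, weight) external events
    Returns the emitted (time, neuron) output spikes.
    """
    v, last, out = {}, {}, []
    events = list(input_spikes)           # priority queue of pending events
    heapq.heapify(events)
    while events:
        t, n, w = heapq.heappop(events)
        # lazy update: between events the potential only decayed exponentially
        v[n] = v.get(n, 0.0) * math.exp(-(t - last.get(n, t)) / tau) + w
        last[n] = t
        if v[n] >= v_th:                  # crossings can only occur at an event
            v[n] = 0.0                    # reset
            out.append((t, n))
            for post, wt, d in synapses.get(n, []):
                heapq.heappush(events, (t + d, post, wt))
    return out
```

The cost scales with the number of spikes times the fan-out rather than with the simulation length divided by DT, which is why event-based methods pay off for sparse, sparsely active networks.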
arXiv Detail & Related papers (2023-12-28T18:48:10Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
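One plausible reading of the spatial part of the decomposition (an assumption on our part, based only on this summary) is that a fine grid is split into interleaved coarser sub-grids, each of which becomes a cheaper subtask:

```python
import numpy as np

def stagger_split(field, s=2):
    """Split a 2D field into s*s interleaved coarser sub-fields.

    Illustrative guess at a staggered spatial decomposition; NeuralStagger's
    actual scheme (including its temporal part) may differ.
    """
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def stagger_merge(subs, s=2):
    """Inverse of stagger_split: reassemble the fine-resolution field."""
    h, w = subs[0].shape
    field = np.empty((h * s, w * s), dtype=subs[0].dtype)
    for k, sub in enumerate(subs):
        field[k // s::s, k % s::s] = sub
    return field
```

For a field whose sides are divisible by s, `stagger_merge(stagger_split(x, s), s)` reconstructs x exactly; each sub-field holds 1/s² of the points, which is what makes the subtasks cheaper.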
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive
Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show how simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection
for Autonomous Driving [0.0]
Spiking Neural Networks are a new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency.
We first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving.
arXiv Detail & Related papers (2022-06-06T20:05:17Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Efficient Neuromorphic Signal Processing with Loihi 2 [6.32784133039548]
We show how Resonate-and-Fire (RF) neurons can be used to compute the Short Time Fourier Transform (STFT) with similar computational complexity but 47x less output bandwidth than the conventional STFT.
We also demonstrate promising preliminary results using backpropagation to train RF neurons for audio classification tasks.
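The connection between resonate-and-fire units and the STFT can be illustrated with a bank of damped complex oscillators, each tuned to one analysis frequency; the magnitude of each state tracks an exponentially-windowed spectral estimate. This is a generic sketch under assumed dynamics, not Loihi 2's actual RF neuron model.

```python
import numpy as np

def rf_filterbank(x, freqs, dt=1e-3, tau=0.1):
    """Bank of resonate-and-fire-style complex oscillators (illustrative).

    Each unit damps with time constant tau and rotates at its own frequency,
    so |z| approximates an exponentially-windowed STFT magnitude over time.
    """
    # per-unit complex decay factor: damping plus rotation at each frequency
    decay = np.exp((-1.0 / tau + 2j * np.pi * np.asarray(freqs)) * dt)
    z = np.zeros(len(freqs), dtype=complex)
    mags = np.empty((len(x), len(freqs)))
    for t, sample in enumerate(x):
        z = decay * z + sample            # damp, rotate, add the new sample
        mags[t] = np.abs(z)               # spectral magnitude per frequency
    return mags
```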
arXiv Detail & Related papers (2021-11-05T22:37:05Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate and Fire neurons, which is capable of training an SNN to learn complex spatio-temporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neurons and synapses that retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.