Runtime Construction of Large-Scale Spiking Neuronal Network Models on
GPU Devices
- URL: http://arxiv.org/abs/2306.09855v1
- Date: Fri, 16 Jun 2023 14:08:27 GMT
- Authors: Bruno Golosio, Jose Villamar, Gianmarco Tiddia, Elena Pastorelli,
Jonas Stapmanns, Viviana Fanti, Pier Stanislao Paolucci, Abigail Morrison and
Johanna Senk
- Abstract summary: We propose a new method for creating network connections interactively, dynamically, and directly in GPU memory.
We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models.
Both network construction and simulation times are comparable to or shorter than those obtained with other state-of-the-art simulation technologies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation speed matters for neuroscientific research: this includes not only
how quickly the simulated model time of a large-scale spiking neuronal network
progresses, but also how long it takes to instantiate the network model in
computer memory. On the hardware side, acceleration via highly parallel GPUs is
being increasingly utilized. On the software side, code generation approaches
ensure highly optimized code, at the expense of repeated code regeneration and
recompilation after modifications to the network model. Aiming for a greater
flexibility with respect to iterative model changes, here we propose a new
method for creating network connections interactively, dynamically, and
directly in GPU memory through a set of commonly used high-level connection
rules. We validate the simulation performance with both consumer and data
center GPUs on two neuroscientifically relevant models: a cortical microcircuit
of about 77,000 leaky-integrate-and-fire neuron models and 300 million static
synapses, and a two-population network recurrently connected using a variety of
connection rules. With our proposed ad hoc network instantiation, both network
construction and simulation times are comparable to or shorter than those obtained
with other state-of-the-art simulation technologies, while still meeting the
flexibility demands of explorative network modeling.
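The "high-level connection rules" mentioned in the abstract refer to common connectivity patterns such as fixed indegree or pairwise Bernoulli, where connections are drawn at random rather than listed explicitly. As a minimal illustration (not the authors' actual API), the sketch below shows what a fixed-indegree rule computes, using NumPy flat arrays as a stand-in for the contiguous GPU memory buffers the paper targets; the function name and parameters are hypothetical.

```python
import numpy as np

def connect_fixed_indegree(n_source, n_target, indegree, rng=None):
    """For each target neuron, draw `indegree` source neurons uniformly at
    random (with replacement), as in the common 'fixed_indegree' rule.
    Returns parallel source/target index arrays, a flat layout that maps
    naturally onto GPU memory buffers."""
    rng = np.random.default_rng() if rng is None else rng
    # One row of `indegree` random source indices per target neuron.
    sources = rng.integers(0, n_source, size=(n_target, indegree))
    # Each target index is repeated once per incoming connection.
    targets = np.repeat(np.arange(n_target), indegree)
    return sources.ravel(), targets

src, tgt = connect_fixed_indegree(n_source=1000, n_target=400, indegree=100)
# 400 targets x 100 incoming connections = 40,000 synapses in flat arrays.
```

Because every target receives exactly `indegree` connections, the synapse count is known up front, so memory can be allocated once and filled in parallel; this is the property that makes such rules amenable to dynamic, in-GPU-memory network construction.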
Related papers
- SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection
for Autonomous Driving
Spiking Neural Networks are a new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency.
We first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving.
arXiv Detail & Related papers (2022-06-06T20:05:17Z)
- A High Throughput Generative Vector Autoregression Model for Stochastic Synapses
We develop a high throughput generative model for synaptic arrays based on electrical measurement data for resistive memory cells.
We demonstrate array sizes above one billion cells and throughputs exceeding one hundred million weight updates per second, above the pixel rate of a 30 frames/s 4K video stream.
arXiv Detail & Related papers (2022-05-10T17:08:30Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z)
- Simulating Network Paths with Recurrent Buffering Units
We seek a model that generates end-to-end packet delay values in response to the time-varying load offered by a sender.
We propose a novel grey-box approach to network simulation that embeds the semantics of physical network path in a new RNN-style architecture called Recurrent Buffering Unit.
arXiv Detail & Related papers (2022-02-23T16:46:31Z)
- Parallel Simulation of Quantum Networks with Distributed Quantum State Management
We identify requirements for parallel simulation of quantum networks and develop the first parallel discrete event quantum network simulator.
Our contributions include the design and development of a quantum state manager that maintains shared quantum information distributed across multiple processes.
We release the parallel SeQUeNCe simulator as an open-source tool alongside the existing sequential version.
arXiv Detail & Related papers (2021-11-06T16:51:17Z)
- Binary Graph Neural Networks
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Reservoir Memory Machines as Neural Computers
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Adaptive Neural Network-Based Approximation to Accelerate Eulerian Fluid Simulation
We introduce Smartfluidnet, a framework that automates model generation and application.
Smartfluidnet generates multiple neural networks before the simulation to meet the execution time and simulation quality requirement.
We show that Smartfluidnet achieves 1.46x and 590x speedup compared with a state-of-the-art neural network model and the original fluid simulation respectively.
arXiv Detail & Related papers (2020-08-26T21:44:44Z)
- Fast simulations of highly-connected spiking cortical models using GPUs
We present a library for large-scale simulations of spiking neural network models written in the C++ programming language.
We will show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity.
arXiv Detail & Related papers (2020-07-28T13:58:50Z)
- One-step regression and classification with crosspoint resistive memory arrays
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of Boston housing-price prediction and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.