A High Throughput Generative Vector Autoregression Model for Stochastic
Synapses
- URL: http://arxiv.org/abs/2205.05053v1
- Date: Tue, 10 May 2022 17:08:30 GMT
- Title: A High Throughput Generative Vector Autoregression Model for Stochastic
Synapses
- Authors: T. Hennen, A. Elias, J. F. Nodin, G. Molas, R. Waser, D. J. Wouters
and D. Bedau
- Abstract summary: We develop a high throughput generative model for synaptic arrays based on electrical measurement data for resistive memory cells.
We demonstrate array sizes above one billion cells and throughputs exceeding one hundred million weight updates per second, above the pixel rate of a 30 frames/s 4K video stream.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By imitating the synaptic connectivity and plasticity of the brain, emerging
electronic nanodevices offer new opportunities as the building blocks of
neuromorphic systems. One challenge for large-scale simulations of computational
architectures based on emerging devices is to accurately capture device
response, hysteresis, noise, and the covariance structure in the temporal
domain as well as between the different device parameters. We address this
challenge with a high throughput generative model for synaptic arrays that is
based on a recently available type of electrical measurement data for resistive
memory cells. We map this real-world data onto a vector autoregressive
stochastic process to accurately reproduce the device parameters and their
cross-correlation structure. While closely matching the measured data, our
model is still very fast; we provide parallelized implementations for both CPUs
and GPUs and demonstrate array sizes above one billion cells and throughputs
exceeding one hundred million weight updates per second, above the pixel rate
of a 30 frames/s 4K video stream.
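The vector autoregressive mapping described in the abstract can be sketched in a few lines. This is a minimal illustration of a VAR(1) process with cross-correlated innovations; the parameter count, autoregression coefficients, and correlation matrix below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 4                    # number of device parameters per cell (illustrative)
A = 0.8 * np.eye(k)      # autoregression matrix: temporal correlation
L = np.linalg.cholesky(  # Cholesky factor imposing cross-correlations
    np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]]))

def var_step(x):
    """One update: x_{t+1} = A x_t + L e_t, with e_t ~ N(0, I)."""
    return A @ x + L @ rng.standard_normal(k)

x = np.zeros(k)
trace = np.stack([x := var_step(x) for _ in range(1000)])
print(trace.shape)  # (1000, 4)
```

Because the spectral radius of `A` is below one, the process is stationary, and the innovation covariance `L @ L.T` fixes the cross-correlations between the simulated device parameters.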
Related papers
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- Synaptogen: A cross-domain generative device model for large-scale neuromorphic circuit design [1.704443882665726]
We present a fast generative modeling approach for resistive memories that reproduces the complex statistical properties of real-world devices.
By training on extensive measurement data of integrated 1T1R arrays, an autoregressive process accurately accounts for the cross-correlations between the parameters.
Benchmarks show that the read/write throughput of this statistically comprehensive model exceeds that of even highly simplified, deterministic compact models.
arXiv Detail & Related papers (2024-04-09T14:33:03Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AI-generated-content (AIGC) methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
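As a rough illustration of the kind of computation such a solver performs, here is a minimal digital Euler integration of a probability-flow ODE for a variance-exploding diffusion. The closed-form Gaussian score below is a toy stand-in for the trained score network and analog hardware described in the paper; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def score(x, t):
    # Exact score of N(0, 1 + t^2): a toy stand-in for a trained score network
    return -x / (1.0 + t**2)

T, steps = 3.0, 300
dt = T / steps
x = np.sqrt(1.0 + T**2) * rng.standard_normal(5000)  # noisy samples at t = T

t = T
for _ in range(steps):
    # probability-flow ODE for sigma(t) = t: dx/dt = -t * score(x, t)
    x -= dt * (-t) * score(x, t)
    t -= dt

print(round(float(x.std()), 2))  # std contracts from ~3.16 toward ~1.0
```

An analog in-memory solver replaces this discretized loop with continuous-time physical dynamics, which is where the claimed speed and efficiency gains come from.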
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- CIMulator: A Comprehensive Simulation Platform for Computing-In-Memory Circuit Macros with Low Bit-Width and Real Memory Materials [0.5325753548715747]
This paper presents a simulation platform, namely CIMulator, for quantifying the efficacy of various synaptic devices in neuromorphic accelerators.
Non-volatile memory devices, such as resistive random-access memory, ferroelectric field-effect transistor, and volatile static random-access memory devices, can be selected as synaptic devices.
A multilayer perceptron and convolutional neural networks (CNNs), such as LeNet-5, VGG-16, and a custom CNN named C4W-1, are simulated to evaluate the effects of these synaptic devices on the training and inference outcomes.
arXiv Detail & Related papers (2023-06-26T12:36:07Z)
- Runtime Construction of Large-Scale Spiking Neuronal Network Models on GPU Devices [0.0]
We propose a new method for creating network connections interactively, dynamically, and directly in GPU memory.
We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models.
Both network construction and simulation times are comparable or shorter than those obtained with other state-of-the-art simulation technologies.
arXiv Detail & Related papers (2023-06-16T14:08:27Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
The ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Fast simulations of highly-connected spiking cortical models using GPUs [0.0]
We present a library for large-scale simulations of spiking neural network models written in the C++ programming language.
We will show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity.
arXiv Detail & Related papers (2020-07-28T13:58:50Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
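The "one computational step" in the entry above corresponds mathematically to solving the least-squares normal equations in a single pass. This sketch emulates digitally what the crosspoint array computes physically through its analog feedback dynamics; the data sizes and weights are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))        # 100 samples, 5 features (illustrative)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.01 * rng.standard_normal(100)

# The crosspoint circuit settles to the least-squares solution in one
# physical step; numerically this is the Moore-Penrose pseudo-inverse.
w_est = np.linalg.pinv(X) @ y
print(np.round(w_est, 1))  # close to w_true
```

A conventional processor would iterate over the data (e.g. gradient descent); the appeal of the analog array is that the matrix inversion happens in the physics of a single settling step.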
This list is automatically generated from the titles and abstracts of the papers in this site.