Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays
- URL: http://arxiv.org/abs/2206.12992v1
- Date: Sun, 26 Jun 2022 23:13:34 GMT
- Title: Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays
- Authors: Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang
- Abstract summary: We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs).
Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes.
We obtain highly competitive accuracy amongst previously reported lightweight dense fully MSNNs on several benchmarks.
- Score: 3.5969667977870796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present MEMprop, the adoption of gradient-based learning to train fully
memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic
device dynamics to trigger naturally arising voltage spikes. These spikes
emitted by memristive dynamics are analog in nature, and thus fully
differentiable, which eliminates the need for surrogate gradient methods that
are prevalent in the spiking neural network (SNN) literature. Memristive neural
networks typically either integrate memristors as synapses that map
offline-trained networks, or otherwise rely on associative learning mechanisms
to train networks of memristive neurons. We instead apply the backpropagation
through time (BPTT) training algorithm directly on analog SPICE models of
memristive neurons and synapses. Our implementation is fully memristive, in
that synaptic weights and spiking neurons are both integrated on resistive RAM
(RRAM) arrays without the need for additional circuits to implement spiking
dynamics, e.g., analog-to-digital converters (ADCs) or thresholded comparators.
As a result, higher-order electrophysical effects are fully exploited to use
the state-driven dynamics of memristive neurons at run time. By moving towards
non-approximate gradient-based learning, we obtain highly competitive accuracy
amongst previously reported lightweight dense fully MSNNs on several
benchmarks.
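To make the training idea concrete, here is a minimal PyTorch sketch of BPTT through a neuron whose spike is a smooth analog function of its internal state, so exact gradients flow with no surrogate function. The `AnalogSpikingNeuron` dynamics, the soft reset, and all parameters are illustrative assumptions, not the paper's SPICE-level memristor model.
```python
# Minimal sketch of gradient-based training through analog spiking dynamics,
# in the spirit of MEMprop. The neuron below is a hypothetical stand-in for
# the paper's SPICE-level memristive neuron: its "spike" is a smooth analog
# function of the membrane state, so BPTT needs no surrogate gradients.
import torch
import torch.nn as nn

class AnalogSpikingNeuron(nn.Module):
    """Toy state-driven neuron: leaky integration plus a smooth spike waveform."""
    def __init__(self, beta=0.9):
        super().__init__()
        self.beta = beta                          # membrane decay per time step

    def forward(self, current, mem):
        mem = self.beta * mem + current           # leaky integration
        spike = torch.sigmoid(10.0 * (mem - 1.0)) # smooth, fully differentiable "spike"
        mem = mem - spike * mem                   # soft reset driven by the analog spike
        return spike, mem

class ToyMSNN(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)      # stands in for a synaptic RRAM crossbar
        self.neuron1 = AnalogSpikingNeuron()
        self.fc2 = nn.Linear(n_hidden, n_out)
        self.neuron2 = AnalogSpikingNeuron()

    def forward(self, x):                         # x: [time, batch, n_in]
        mem1 = torch.zeros(x.size(1), self.fc1.out_features)
        mem2 = torch.zeros(x.size(1), self.fc2.out_features)
        out = []
        for t in range(x.size(0)):                # unroll over time for BPTT
            s1, mem1 = self.neuron1(self.fc1(x[t]), mem1)
            s2, mem2 = self.neuron2(self.fc2(s1), mem2)
            out.append(s2)
        return torch.stack(out).sum(0)            # accumulated spikes as class scores

net = ToyMSNN(784, 100, 10)
optim = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(25, 32, 784)                       # [time, batch, features] dummy input
target = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(net(x), target)
loss.backward()                                   # exact gradients, no surrogate needed
optim.step()
```
The point mirrored from the abstract is that nothing in the forward pass is non-differentiable, so `loss.backward()` computes exact gradients through time.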
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
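For flavor, a minimal sketch of building a small SNN with SpikingJelly follows. The `spikingjelly.activation_based` module path, `LIFNode`, and `reset_net` follow the project's documented API, but verify against the installed release; the toy network itself is an assumption.
```python
# Minimal sketch of building a deep SNN with SpikingJelly's activation-based
# API. Module paths follow the library's documentation; check them against
# the installed version before use.
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, functional, surrogate

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 100),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),  # spiking layer
    nn.Linear(100, 10),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
)

x = torch.rand(8, 1, 28, 28)          # dummy batch of MNIST-sized images
out = sum(net(x) for _ in range(20))  # accumulate output spikes over 20 time steps
functional.reset_net(net)             # reset membrane states between samples
```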
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Neuromorphic Hebbian learning with magnetic tunnel junction synapses [41.92764939721262]
We propose and experimentally demonstrate neuromorphic networks that provide high-accuracy inference thanks to the binary resistance states of magnetic tunnel junctions (MTJs).
We performed the first demonstration of a neuromorphic network directly implemented with MTJ synapses, for both inference and spike-timing-dependent plasticity learning.
We also demonstrated through simulation that the proposed system for unsupervised Hebbian learning with STT-MTJ synapses can achieve competitive accuracies for MNIST handwritten digit recognition.
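The summary above does not state the update rule; the sketch below shows one generic way to marry STDP with binary synapses, flipping a two-state weight with a probability set by spike timing, as stochastic switching of MTJs is sometimes used. All names and constants are hypothetical, not the paper's scheme.
```python
# Illustrative (not the paper's rule): pair-based STDP with stochastic binary
# synapses, one common way to exploit the two resistance states of an MTJ.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 100, 10
w = rng.integers(0, 2, size=(n_pre, n_post)).astype(float)  # binary MTJ states

def stdp_step(w, t_pre, t_post, a_plus=0.1, a_minus=0.1, tau=20.0):
    """Stochastically flip binary weights with a probability set by spike timing."""
    dt = t_post[None, :] - t_pre[:, None]             # pairwise timing differences (ms)
    p_pot = a_plus * np.exp(-dt / tau) * (dt > 0)     # pre-before-post: potentiate
    p_dep = a_minus * np.exp(dt / tau) * (dt < 0)     # post-before-pre: depress
    w = np.where(rng.random(w.shape) < p_pot, 1.0, w)
    w = np.where(rng.random(w.shape) < p_dep, 0.0, w)
    return w

t_pre = rng.uniform(0, 50, n_pre)     # dummy spike times for one presentation
t_post = rng.uniform(0, 50, n_post)
w = stdp_step(w, t_pre, t_post)
```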
arXiv Detail & Related papers (2023-08-21T19:58:44Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed biophysical cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
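A minimal example of snnTorch's leaky integrate-and-fire layer is sketched below. `snn.Leaky`, `init_leaky`, and the `(spike, membrane)` return convention follow snnTorch's public API; the surrounding toy loop and data are assumptions.
```python
# Minimal sketch of a leaky integrate-and-fire layer in snnTorch, the package
# the entry refers to. The API calls follow snnTorch's documentation; the toy
# network and data are placeholders.
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(784, 10)
lif = snn.Leaky(beta=0.9)            # membrane decay rate beta

x = torch.rand(25, 32, 784)          # [time, batch, features] dummy input
mem = lif.init_leaky()               # initialize membrane potential
spk_out = []
for t in range(x.size(0)):
    spk, mem = lif(fc(x[t]), mem)    # returns spikes and updated membrane
    spk_out.append(spk)
spikes = torch.stack(spk_out)        # [time, batch, 10] output spike trains
```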
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
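The summary does not say how a continuous target is decoded from spikes; one common choice, sketched below purely as an assumption rather than the paper's method, is to read regression outputs off the membrane potential of a non-spiking leaky output layer.
```python
# Illustrative sketch (not the paper's method): regression with an SNN by
# reading out the membrane potential of a non-spiking leaky output layer.
import torch
import torch.nn as nn

class LeakyReadout(nn.Module):
    """Leaky integrator without a spike mechanism; its membrane is the output."""
    def __init__(self, beta=0.95):
        super().__init__()
        self.beta = beta

    def forward(self, current, mem):
        return self.beta * mem + current

fc = nn.Linear(16, 1)
readout = LeakyReadout()
x = torch.rand(50, 8, 16)            # [time, batch, features] input spike trains
mem = torch.zeros(8, 1)
for t in range(x.size(0)):
    mem = readout(fc(x[t]), mem)
y_pred = mem                         # continuous-valued regression output
loss = nn.functional.mse_loss(y_pred, torch.rand(8, 1))
```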
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks [2.8971214387667494]
We present a fully memristive spiking neural network (MSNN) consisting of novel memristive neurons trained using the backpropagation through time (BPTT) learning rule.
Gradient descent is applied directly to the memristive integrate-and-fire (MIF) neuron designed using analog SPICE circuit models.
We achieve 97.58% accuracy on the MNIST testing dataset and 75.26% on the Fashion-MNIST testing dataset, the highest accuracies among all fully MSNNs.
arXiv Detail & Related papers (2022-03-02T21:34:43Z)
- A Fully Memristive Spiking Neural Network with Unsupervised Learning [2.8971214387667494]
The system is fully memristive in that both neuronal and synaptic dynamics can be realized by using memristors.
The proposed MSNN implements STDP learning through cumulative weight changes in the memristive synapses, driven by the voltage waveforms across them.
arXiv Detail & Related papers (2022-03-02T21:16:46Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
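To unpack the two-phase training idea, here is a generic equilibrium propagation demo on a tiny x -> h -> y network: settle a free phase, settle a weakly clamped phase, then apply the contrastive update scaled by 1/beta. This is the textbook algorithm in NumPy, not the paper's nonlinear-resistive-network formulation, and every name and constant is illustrative.
```python
# Generic equilibrium propagation demo on a tiny layered network (not the
# paper's resistive-network formulation). rho is a hard-sigmoid nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
rho = lambda s: np.clip(s, 0.0, 1.0)

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

def relax(x, y=None, beta=0.0, steps=50, lr=0.1):
    """Settle the state toward a minimum of the (optionally nudged) energy."""
    h = np.zeros(n_hid); o = np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x @ W1 + rho(o) @ W2.T)   # simplified energy gradient wrt h
        do = -o + rho(rho(h) @ W2)              # simplified energy gradient wrt o
        if y is not None:
            do += beta * (y - o)                # weak clamping toward the target
        h += lr * dh; o += lr * do
    return h, o

x = rng.random(n_in); y = np.array([1.0, 0.0])
beta, eta = 0.5, 0.05

h0, o0 = relax(x)                    # free phase
hb, ob = relax(x, y, beta)           # nudged phase
# Contrastive updates: difference of local co-activations, scaled by 1/beta.
W1 += eta * np.outer(rho(x), rho(hb) - rho(h0)) / beta
W2 += eta * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0))) / beta
```
Both phases use only local dynamics, which is what makes the scheme attractive for on-chip learning in analog hardware.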
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
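A generic caricature of such a rewiring loop is sketched below, assuming a fixed fan-in per neuron: periodically prune the weakest synapses and reconnect them to fresh presynaptic partners. This is an illustrative sketch, not the BrainScaleS-2 implementation.
```python
# Generic structural-plasticity sketch (not the BrainScaleS-2 implementation):
# periodically replace the weakest synapses with new random connections.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_fanin = 32, 8                            # each neuron keeps a fixed fan-in
pre_ids = rng.choice(n_pre, n_fanin, replace=False)
weights = rng.random(n_fanin)

def rewire(pre_ids, weights, n_swap=2):
    """Prune the n_swap weakest synapses and rewire them to fresh partners."""
    worst = np.argsort(weights)[:n_swap]              # weakest synapses
    free = np.setdiff1d(np.arange(n_pre), pre_ids)    # unused presynaptic partners
    pre_ids[worst] = rng.choice(free, n_swap, replace=False)
    weights[worst] = 0.5 * rng.random(n_swap)         # re-initialize small weights
    return pre_ids, weights

pre_ids, weights = rewire(pre_ids, weights)
```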
arXiv Detail & Related papers (2019-12-27T10:15:58Z)