Short-Term Plasticity Neurons Learning to Learn and Forget
- URL: http://arxiv.org/abs/2206.14048v1
- Date: Tue, 28 Jun 2022 14:47:56 GMT
- Title: Short-Term Plasticity Neurons Learning to Learn and Forget
- Authors: Hector Garcia Rodriguez, Qinghai Guo, Timoleon Moraitis
- Abstract summary: Short-term plasticity (STP) is a mechanism that stores decaying memories in synapses of the cerebral cortex.
Here we present a new type of recurrent neural unit, the STP Neuron (STPN), which indeed turns out strikingly powerful.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Short-term plasticity (STP) is a mechanism that stores decaying memories in
synapses of the cerebral cortex. In computing practice, STP has been used, but
mostly in the niche of spiking neurons, even though theory predicts that it is
the optimal solution to certain dynamic tasks. Here we present a new type of
recurrent neural unit, the STP Neuron (STPN), which indeed turns out strikingly
powerful. Its key mechanism is that synapses have a state, propagated through
time by a self-recurrent connection-within-the-synapse. This formulation
enables training the plasticity with backpropagation through time, resulting in
a form of learning to learn and forget in the short term. The STPN outperforms
all tested alternatives, i.e. RNNs, LSTMs, other models with fast weights, and
differentiable plasticity. We confirm this in both supervised and reinforcement
learning (RL), and in tasks such as Associative Retrieval, Maze Exploration,
Atari video games, and MuJoCo robotics. Moreover, we calculate that, in
neuromorphic or biological circuits, the STPN minimizes energy consumption
across models, as it depresses individual synapses dynamically. Based on these results,
biological STP may have been a strong evolutionary attractor that maximizes
both efficiency and computational power. The STPN now brings these neuromorphic
advantages also to a broad spectrum of machine learning practice. Code is
available at https://github.com/NeuromorphicComputing/stpn
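To make the mechanism concrete, below is a minimal sketch of such a unit in PyTorch. It keeps a per-synapse state F that decays and is updated by a Hebbian (pre x post) term, so the effective weight is W + F; unrolling the cell and backpropagating through time trains the plasticity parameters themselves. The decay/gain parameterization, the tanh nonlinearity, and all names here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class STPNCell(nn.Module):
    """Sketch of a short-term-plasticity unit: each synapse carries a state F,
    propagated through time by its own recurrence, and the effective weight
    is W + F. Illustrative only; not the paper's exact formulation."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(n_out, n_in))       # slow (fixed) weights
        self.lam = nn.Parameter(torch.zeros(n_out, n_in))           # per-synapse decay, squashed by sigmoid
        self.gamma = nn.Parameter(0.01 * torch.randn(n_out, n_in))  # per-synapse Hebbian gain

    def forward(self, x, F):
        # x: (batch, n_in); F: (batch, n_out, n_in) short-term synaptic state
        y = torch.tanh(torch.einsum('boi,bi->bo', self.W + F, x))
        # self-recurrence within the synapse: decay the old state, add a pre*post term;
        # BPTT through this recurrence is what "learns to learn and forget"
        F = torch.sigmoid(self.lam) * F + self.gamma * torch.einsum('bo,bi->boi', y, x)
        return y, F

cell = STPNCell(n_in=8, n_out=4)
x = torch.randn(2, 16, 8)            # (batch, time, features)
F = torch.zeros(2, 4, 8)             # plastic state starts at zero
for t in range(x.shape[1]):
    y, F = cell(x[:, t], F)          # a loss on y would train W, lam, gamma via BPTT
```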
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Neuron-centric Hebbian Learning [3.195234044113248]
We propose a novel plasticity model, called Neuron-centric Hebbian Learning (NcHL)
Compared to the ABCD rule, NcHL reduces the number of plasticity parameters from $5W$ to $5N$, where $W$ and $N$ are the numbers of weights and neurons, respectively, and usually $N \ll W$ (a toy parameter count is sketched after this entry).
We also devise a "weightless" NcHL model, which requires less memory by approximating the weights based on a record of neuron activations.
arXiv Detail & Related papers (2024-02-16T17:38:28Z)
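As a toy illustration of that parameter count, the sketch below contrasts the synapse-centric ABCD rule (five coefficients per synapse) with a neuron-centric scheme that stores five coefficients per neuron and composes per-synapse values from the pre- and post-synaptic neurons; the averaging used for the composition is an assumption for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

n_pre, n_post = 100, 100
W = n_pre * n_post                            # number of synapses
N = n_pre + n_post                            # number of neurons
print("synapse-centric ABCD params:", 5 * W)  # 50000
print("neuron-centric params:      ", 5 * N)  # 1000

rng = np.random.default_rng(0)
theta = rng.standard_normal((N, 5))           # five plasticity parameters per neuron
pre, post = theta[:n_pre], theta[n_pre:]
# assumed composition: a synapse's (A, B, C, D, eta) = mean of its two neurons' parameters
A, B, C, D, eta = np.moveaxis(0.5 * (post[:, None, :] + pre[None, :, :]), -1, 0)

x = rng.standard_normal(n_pre)                # pre-synaptic activations
y = rng.standard_normal(n_post)               # post-synaptic activations
dW = eta * (A * np.outer(y, x) + B * x[None, :] + C * y[:, None] + D)  # ABCD-style Hebbian update
print(dW.shape)                               # (100, 100): one update per synapse
```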
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping in a photorealistic autonomous driving simulator to evaluate the performance of these networks under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Neuromorphic Hebbian learning with magnetic tunnel junction synapses [41.92764939721262]
We propose and experimentally demonstrate neuromorphic networks that provide high-accuracy inference thanks to the binary resistance states of magnetic tunnel junctions (MTJs).
We performed the first demonstration of a neuromorphic network directly implemented with MTJ synapses, for both inference and spike-timing-dependent plasticity learning.
We also demonstrated through simulation that the proposed system for unsupervised Hebbian learning with STT-MTJ synapses can achieve competitive accuracies for MNIST handwritten digit recognition.
arXiv Detail & Related papers (2023-08-21T19:58:44Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
The ELM neuron can accurately match a cortical neuron's input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model.
arXiv Detail & Related papers (2022-11-29T21:07:23Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate-and-Fire (LIF) neurons, capable of training an SNN to learn complex spatio-temporal patterns (a minimal LIF update is sketched after this entry).
We have developed a CMOS circuit implementation for a memristor-based network of neurons and synapses that retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
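For context, a leaky integrate-and-fire neuron integrates input current on a leaky membrane and emits a spike (then resets) when its potential crosses a threshold. The sketch below is the generic discrete-time textbook update, not the specific training algorithm of the cited paper.

```python
import numpy as np

def lif_step(v, i_in, v_th=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    v = v + dt / tau * (i_in - v)             # leaky integration (Euler step)
    spikes = (v >= v_th).astype(float)        # fire where the threshold is crossed
    v = np.where(spikes > 0, v_reset, v)      # reset the neurons that fired
    return v, spikes

v = np.zeros(4)
total = 0
for t in range(200):
    v, s = lif_step(v, i_in=np.full(4, 1.5))  # constant supra-threshold drive
    total += s.sum()
print("spikes emitted:", int(total))
```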
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.