Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs
- URL: http://arxiv.org/abs/2302.03235v1
- Date: Tue, 7 Feb 2023 03:42:42 GMT
- Title: Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs
- Authors: Yu Duan, Zhongfan Jia, Qian Li, Yi Zhong, Kaisheng Ma
- Abstract summary: Evidence supports that synaptic plasticity plays a critical role in memory formation and fast learning.
We equip Recurrent Neural Networks with plasticity rules to enable them to adapt their parameters according to ongoing experiences.
Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories.
- Score: 13.250455334302288
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Rapidly learning from ongoing experiences and remembering past events with a
flexible memory system are two core capacities of biological intelligence.
While the underlying neural mechanisms are not fully understood, various
lines of evidence support that synaptic plasticity plays a critical role in memory
formation and fast learning. Inspired by these results, we equip Recurrent
Neural Networks (RNNs) with plasticity rules to enable them to adapt their
parameters according to ongoing experiences. In addition to the traditional
local Hebbian plasticity, we propose a global, gradient-based plasticity rule,
which allows the model to evolve towards its self-determined target. Our models
show promising results on sequential and associative memory tasks, illustrating
their ability to robustly form and retain memories. At the same time, these
models can cope with many challenging few-shot learning problems. Comparing
different plasticity rules under the same framework shows that Hebbian
plasticity is well-suited for several memory and associative learning tasks;
however, it is outperformed by gradient-based plasticity on few-shot regression
tasks which require the model to infer the underlying mapping. Code is
available at https://github.com/yuvenduan/PlasticRNNs.
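
To make the local Hebbian rule concrete, below is a minimal sketch of a recurrent cell whose effective recurrent weights combine slow weights with a fast Hebbian trace, in the spirit of differentiable-plasticity formulations. This is an illustrative reconstruction, not the authors' implementation (see the linked repository for that); the names W_in, W_rec, P, alpha, eta, and lam are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class PlasticRNNCell:
    """Recurrent cell whose effective recurrent weights are the sum of
    slow weights (trained in the outer loop) and a fast Hebbian trace."""

    def __init__(self, n_in, n_hidden, eta=0.1, lam=0.9):
        self.W_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_hidden))
        self.alpha = 0.1 * np.ones((n_hidden, n_hidden))  # per-synapse plasticity gain
        self.eta, self.lam = eta, lam   # Hebbian learning rate and trace decay
        self.P = np.zeros((n_hidden, n_hidden))  # fast plastic component
        self.h = np.zeros(n_hidden)

    def step(self, x):
        h_prev = self.h
        w_eff = self.W_rec + self.alpha * self.P  # slow + fast weights
        self.h = np.tanh(self.W_in @ x + w_eff @ h_prev)
        # Local Hebbian rule: decay the trace, then reinforce synapses whose
        # pre- and post-synaptic neurons are co-active (outer product).
        self.P = self.lam * self.P + self.eta * np.outer(self.h, h_prev)
        return self.h

cell = PlasticRNNCell(n_in=4, n_hidden=16)
for _ in range(10):
    h = cell.step(rng.normal(size=4))
```

In such a setup, the slow weights and the plasticity hyperparameters (alpha, eta, lam) are trained end-to-end by backpropagation, while P changes online as the sequence unfolds.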
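
The global, gradient-based rule can be sketched in the same frame: instead of a Hebbian outer product, the fast parameters take an inner-loop gradient step on a loss the network itself defines, so they evolve toward a self-determined target. This too is a hedged illustration; the quadratic inner loss and the target y_hat are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_h, n_out = 16, 4
W_out = rng.normal(0.0, 1.0 / np.sqrt(n_h), (n_out, n_h))  # plastic readout
inner_lr = 0.05

def inner_update(h, target):
    """One gradient step on the inner loss L = 0.5 * ||W_out h - target||^2."""
    global W_out
    err = W_out @ h - target               # dL/d(W_out h)
    W_out -= inner_lr * np.outer(err, h)   # dL/dW_out = err h^T

h = np.tanh(rng.normal(size=n_h))        # current hidden state
y_hat = np.tanh(rng.normal(size=n_out))  # self-determined target (illustrative)
inner_update(h, y_hat)
```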
Related papers
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and demand long-term memory.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning (a schematic forward-forward sketch appears after this list).
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Neuromimetic metaplasticity for adaptive continual learning [2.1749194587826026]
We propose a metaplasticity model inspired by human working memory to achieve catastrophic forgetting-free continual learning.
A key aspect of our approach involves implementing distinct types of synapses, ranging from stable to flexible, and randomly intermixing them to train synaptic connections with different degrees of flexibility.
The model achieved a balanced tradeoff between memory capacity and performance without requiring additional training or structural modifications.
arXiv Detail & Related papers (2024-07-09T12:21:35Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning [54.409634256153154]
In Reinforcement Learning (RL), enhancing sample efficiency is crucial.
In principle, off-policy RL algorithms can improve sample efficiency by allowing multiple updates per environment interaction; in practice, however, such repeated updates degrade the network's plasticity.
Our study investigates the underlying causes of this phenomenon by dividing plasticity into two aspects: plasticity to input data and plasticity to labels.
arXiv Detail & Related papers (2023-06-19T06:14:51Z)
- Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP [2.179313476241343]
We propose a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex.
Our model can readily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware (a schematic reward-modulated STDP update is sketched after this list).
arXiv Detail & Related papers (2023-06-07T13:08:46Z)
- Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity [9.453554184019108]
Hebbian plasticity is believed to play a pivotal role in biological memory.
We introduce a novel spiking neural network architecture that is enriched by Hebbian synaptic plasticity.
We show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities.
arXiv Detail & Related papers (2022-05-23T12:48:37Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Enabling Continual Learning with Differentiable Hebbian Plasticity [18.12749708143404]
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.
Catastrophic forgetting poses a grand challenge for neural networks performing such a learning process.
We propose a Differentiable Hebbian Consolidation model built around a Differentiable Hebbian Plasticity component.
arXiv Detail & Related papers (2020-06-30T06:42:19Z)
- Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity [14.19992298135814]
We show for the first time that artificial neural networks with neuromodulated plasticity can be trained with gradient descent, and that such plasticity improves performance on both reinforcement learning and supervised learning tasks (a schematic neuromodulated Hebbian update is sketched after this list).
arXiv Detail & Related papers (2020-02-24T23:19:17Z)
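
As referenced in the Contrastive Learning in Memristor-based Neuromorphic Systems entry, forward-forward-style learning trains each layer locally: activations should have high "goodness" (sum of squared activities) on positive samples and low goodness on negative ones, with no backpropagation between layers. The sketch below follows Hinton's generic forward-forward formulation rather than that paper's spiking CSDP rule; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(0.0, 0.1, (32, 16))   # one layer's weights
theta, lr = 2.0, 0.01                # goodness threshold and learning rate

def local_update(x, positive):
    """Purely layer-local update: push goodness above theta for positive
    samples and below theta for negative samples."""
    global W
    h = np.maximum(W @ x, 0.0)                       # ReLU activations
    goodness = np.sum(h ** 2)
    p = 1.0 / (1.0 + np.exp(-(goodness - theta)))    # P(sample is positive)
    target = 1.0 if positive else 0.0
    # Gradient of the local logistic loss w.r.t. W; ReLU zeros in h already
    # mask out inactive units, so no explicit derivative mask is needed.
    W -= lr * (p - target) * 2.0 * np.outer(h, x)
    return h

for _ in range(100):
    local_update(rng.normal(size=16), positive=True)
    local_update(rng.normal(size=16), positive=False)
```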
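
For the Meta-Learning in Spiking Neural Networks entry, the following is a schematic of the textbook reward-modulated STDP rule: a pair-based STDP update accumulates into an eligibility trace, and a third factor (reward) gates the actual weight change. This is the generic three-factor rule, not that paper's exact model; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 8, 4
w = np.zeros((n_post, n_pre))      # synaptic weights
elig = np.zeros_like(w)            # eligibility traces (candidate changes)
x_pre = np.zeros(n_pre)            # low-pass filtered presynaptic spikes
x_post = np.zeros(n_post)          # low-pass filtered postsynaptic spikes
tau = 0.9                          # trace decay per time step
a_plus, a_minus = 0.010, 0.012     # potentiation / depression amplitudes
lr = 0.5                           # learning rate for the reward factor

for t in range(200):
    pre = (rng.random(n_pre) < 0.1).astype(float)    # toy Poisson-like spikes
    post = (rng.random(n_post) < 0.1).astype(float)
    reward = rng.normal()                            # toy reward signal
    x_pre = tau * x_pre + pre
    x_post = tau * x_post + post
    # Pair-based STDP: potentiate when a post spike follows recent pre
    # activity, depress when a pre spike follows recent post activity.
    dstdp = a_plus * np.outer(post, x_pre) - a_minus * np.outer(x_post, pre)
    elig = tau * elig + dstdp      # store the change as an eligibility trace
    w += lr * reward * elig        # third factor: reward gates the update
```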
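
Finally, for the Backpropamine entry, a compact sketch of neuromodulated plasticity: a scalar signal M(t), produced by the network itself, scales (and can sign-flip) the Hebbian trace update. This shows forward dynamics only, with plain numpy; in the paper the whole system, including the modulator, is trained with gradient descent. Names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # slow weights (trained by SGD)
alpha = 0.1 * np.ones((n, n))                  # per-synapse plasticity gains
w_mod = rng.normal(0.0, 1.0 / np.sqrt(n), n)   # readout producing the modulator
P = np.zeros((n, n))                           # fast Hebbian trace
h = np.zeros(n)

for t in range(20):
    h_prev = h
    h = np.tanh((W + alpha * P) @ h_prev + rng.normal(size=n))
    M = np.tanh(w_mod @ h)                 # self-generated neuromodulator M(t)
    # Neuromodulated Hebbian update: M(t) gates how fast the trace changes;
    # clipping keeps the fast weights bounded.
    P = np.clip(P + M * np.outer(h, h_prev), -1.0, 1.0)
```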
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.