Backpropamine: training self-modifying neural networks with
differentiable neuromodulated plasticity
- URL: http://arxiv.org/abs/2002.10585v1
- Date: Mon, 24 Feb 2020 23:19:17 GMT
- Title: Backpropamine: training self-modifying neural networks with
differentiable neuromodulated plasticity
- Authors: Thomas Miconi and Aditya Rawal and Jeff Clune and Kenneth O. Stanley
- Abstract summary: We show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent.
We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks.
- Score: 14.19992298135814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The impressive lifelong learning in animal brains is primarily enabled by
plastic changes in synaptic connectivity. Importantly, these changes are not
passive, but are actively controlled by neuromodulation, which is itself under
the control of the brain. The resulting self-modifying abilities of the brain
play an important role in learning and adaptation, and are a major basis for
biological reinforcement learning. Here we show for the first time that
artificial neural networks with such neuromodulated plasticity can be trained
with gradient descent. Extending previous work on differentiable Hebbian
plasticity, we propose a differentiable formulation for the neuromodulation of
plasticity. We show that neuromodulated plasticity improves the performance of
neural networks on both reinforcement learning and supervised learning tasks.
In one task, neuromodulated plastic LSTMs with millions of parameters
outperform standard LSTMs on a benchmark language modeling task (controlling
for the number of parameters). We conclude that differentiable neuromodulation
of plasticity offers a powerful new framework for training neural networks.
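The mechanism the abstract describes is a recurrent layer whose effective weights combine a fixed component with a plastic Hebbian trace, where the trace's update is scaled by a neuromodulatory signal computed by the network itself. Below is a minimal PyTorch sketch of that idea; the module name, input handling, single-scalar modulator head, initialization scale, and clamp range are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a neuromodulated plastic recurrent cell in the spirit of
# Backpropamine's "simple neuromodulation". Names and hyperparameters are
# illustrative assumptions, not the paper's exact code.
import torch
import torch.nn as nn

class NeuromodulatedPlasticCell(nn.Module):
    def __init__(self, size: int):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(size, size))      # fixed weights, meta-trained by SGD
        self.alpha = nn.Parameter(0.01 * torch.randn(size, size))  # per-connection plasticity coefficients
        self.w_mod = nn.Linear(size, 1)                            # emits the neuromodulatory signal M(t)

    def forward(self, x, h, hebb):
        # Effective weight = fixed component + Hebbian trace gated by alpha.
        # (Assumes the input x is already projected to the hidden size.)
        w_eff = self.w + self.alpha * hebb                         # (batch, size, size)
        h_new = torch.tanh(x + torch.bmm(h.unsqueeze(1), w_eff).squeeze(1))
        # The network computes M(t) itself, and M(t) scales the Hebbian update:
        # Hebb(t+1) = clip(Hebb(t) + M(t) * outer(pre, post)).
        m = torch.tanh(self.w_mod(h_new)).unsqueeze(2)             # (batch, 1, 1)
        outer = h.unsqueeze(2) * h_new.unsqueeze(1)                # pre-by-post outer product
        hebb = torch.clamp(hebb + m * outer, -1.0, 1.0)
        return h_new, hebb

# The Hebbian trace starts at zero and evolves within an episode; w, alpha,
# and w_mod are trained across episodes with ordinary backpropagation.
cell = NeuromodulatedPlasticCell(64)
h, hebb = torch.zeros(8, 64), torch.zeros(8, 64, 64)
for _ in range(10):
    h, hebb = cell(torch.randn(8, 64), h, hebb)
```

Because the whole update, including the modulatory signal, is differentiable, gradients flow through the within-episode plastic dynamics, which is what lets gradient descent meta-train the plasticity itself.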
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Neuroplastic Expansion in Deep Reinforcement Learning [9.297543779239826]
We propose a novel approach, Neuroplastic Expansion (NE), inspired by cortical expansion in cognitive science.
NE maintains learnability and adaptability throughout the entire training process by dynamically growing the network from a smaller initial size to its full dimension.
Our method is designed with three key components: (1) elastic neuron generation based on potential gradients, (2) dormant neuron pruning to optimize network expressivity, and (3) neuron consolidation via experience review.
arXiv Detail & Related papers (2024-10-10T14:51:14Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Control of synaptic plasticity via the fusion of reinforcement learning and unsupervised learning in neural networks [0.0]
In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in the brain's remarkable learning capability.
With this inspiration, a new learning rule is proposed that fuses reinforcement learning and unsupervised learning.
In the proposed computational model, nonlinear optimal control theory is used to model error feedback loop systems.
arXiv Detail & Related papers (2023-03-26T12:18:03Z)
- Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs [13.250455334302288]
Evidence suggests that synaptic plasticity plays a critical role in memory formation and fast learning.
We equip Recurrent Neural Networks with plasticity rules to enable them to adapt their parameters according to ongoing experiences.
Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories.
arXiv Detail & Related papers (2023-02-07T03:42:42Z)
- SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Adaptive Reinforcement Learning through Evolving Self-Modifying Neural Networks [0.0]
Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval.
Recent work addressing this by endowing artificial neural networks with neuromodulated plasticity has been shown to improve performance on simple RL tasks trained using backpropagation.
Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable.
Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient descent.
arXiv Detail & Related papers (2020-05-22T02:24:44Z)