Meta-Learning through Hebbian Plasticity in Random Networks
- URL: http://arxiv.org/abs/2007.02686v5
- Date: Tue, 19 Apr 2022 10:13:52 GMT
- Title: Meta-Learning through Hebbian Plasticity in Random Networks
- Authors: Elias Najarro and Sebastian Risi
- Abstract summary: Lifelong learning and adaptability are two defining aspects of biological agents.
Inspired by this biological mechanism, we propose a search method that only searches for synapse-specific Hebbian learning rules.
We find that starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamical 2D-pixel environment.
- Score: 12.433600693422235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lifelong learning and adaptability are two defining aspects of biological
agents. Modern reinforcement learning (RL) approaches have shown significant
progress in solving complex tasks; however, once training is concluded, the
found solutions are typically static and incapable of adapting to new
information or perturbations. While it is still not completely understood how
biological brains learn and adapt so efficiently from experience, it is
believed that synaptic plasticity plays a prominent role in this process.
Inspired by this biological mechanism, we propose a search method that, instead
of optimizing the weight parameters of neural networks directly, only searches
for synapse-specific Hebbian learning rules that allow the network to
continuously self-organize its weights during the lifetime of the agent. We
demonstrate our approach on several reinforcement learning tasks with different
sensory modalities and more than 450K trainable plasticity parameters. We find
that starting from completely random weights, the discovered Hebbian rules
enable an agent to navigate a dynamical 2D-pixel environment; likewise they
allow a simulated 3D quadrupedal robot to learn how to walk while adapting to
morphological damage not seen during training and in the absence of any
explicit reward or error signal in less than 100 timesteps. Code is available
at https://github.com/enajx/HebbianMetaLearning.
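The evolved per-synapse rules described in the abstract are commonly expressed in the ABCD form, where each synapse w_ij updates as Δw_ij = η(A_ij·o_i·o_j + B_ij·o_i + C_ij·o_j + D_ij) from pre- and post-synaptic activations alone. A minimal NumPy sketch of how such rules let randomly initialized weights self-organize over an agent's lifetime (layer sizes, activation function, and the toy loop are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def hebbian_update(w, pre, post, A, B, C, D, eta=0.01):
    """One ABCD Hebbian step: each synapse w[i, j] changes as a function of
    its pre-activation pre[i], post-activation post[j], and its own evolved
    coefficients A, B, C, D."""
    # Outer product / broadcasting apply the per-synapse rule layer-wide.
    dw = A * np.outer(pre, post) + B * pre[:, None] + C * post[None, :] + D
    return w + eta * dw

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
w = rng.standard_normal((n_in, n_out))   # completely random initial weights
# Per-synapse plasticity coefficients: these, not w, are what evolution tunes.
A, B, C, D = (rng.standard_normal((n_in, n_out)) for _ in range(4))

x = rng.standard_normal(n_in)            # a fixed toy observation
for _ in range(100):                     # agent lifetime
    y = np.tanh(x @ w)                   # forward pass
    w = hebbian_update(w, x, y, A, B, C, D)  # plasticity, no reward signal
```

During evolution only A, B, C, D (and optionally η) are optimized; the weights themselves are regenerated randomly each episode and shaped entirely by the rules.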
Related papers
- Life, uh, Finds a Way: Systematic Neural Search [2.163881720692685]
We tackle the challenge of rapidly adapting an agent's behavior to solve continuous problems in novel settings.
Instead of focusing on deep reinforcement learning, we propose viewing behavior as the physical manifestation of a search procedure.
We describe an algorithm that implicitly enumerates behaviors by regulating the tight feedback loop between execution of behaviors and mutation of the graph.
arXiv Detail & Related papers (2024-10-02T09:06:54Z) - Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing [3.735012564657653]
Digital neuromorphic technology simulates the neural and synaptic processes of the brain using two stages of learning.
We demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor with its plasticity dynamics.
Our methodology can be deployed with arbitrary plasticity models and can be applied to situations demanding quick learning and adaptation at the edge.
arXiv Detail & Related papers (2024-08-28T13:51:52Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently achieves continual learning for spiking neural networks with nearly zero forgetting.
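Hebbian learning with a weight-decay term (Oja's rule) is the classic mechanism by which principal-subspace extraction of this kind works. A minimal single-neuron sketch (an illustration of the principle, not the paper's spiking implementation; the data distribution here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data whose principal direction lies along the first coordinate.
X = rng.standard_normal((2000, 3)) * np.array([3.0, 1.0, 0.5])

w = rng.standard_normal(3)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x                   # post-synaptic activity
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus decay

# w converges (up to sign) toward the top principal direction.
```

The decay term −y²·w keeps the weight vector bounded, so the purely local update settles on the leading eigenvector of the input covariance; stacking such units with anti-Hebbian lateral connections extends this to a full principal subspace.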
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Learning efficient backprojections across cortical hierarchies in real
time [1.6474865533365743]
We introduce a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies.
All weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses.
Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment.
arXiv Detail & Related papers (2022-12-20T13:54:04Z) - Learning to Modulate Random Weights: Neuromodulation-inspired Neural
Networks For Efficient Continual Learning [1.9580473532948401]
We introduce a novel neural network architecture inspired by neuromodulation in biological nervous systems.
We show that this approach has strong learning performance per task despite the very small number of learnable parameters.
arXiv Detail & Related papers (2022-04-08T21:12:13Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
But sometimes it can be very difficult to debug if the deep learning model doesn't work.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Fully Online Meta-Learning Without Task Boundaries [80.09124768759564]
We study how meta-learning can be applied to tackle online problems of this nature.
We propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground truth knowledge about the task boundaries.
Our experiments show that FOML was able to learn new tasks faster than the state-of-the-art online learning methods.
arXiv Detail & Related papers (2022-02-01T07:51:24Z) - Learning compositional functions via multiplicative weight updates [97.9457834009578]
We show that multiplicative weight updates satisfy a descent lemma tailored to compositional functions.
We show that Madam can train state of the art neural network architectures without learning rate tuning.
arXiv Detail & Related papers (2020-06-25T17:05:19Z) - Adaptive Reinforcement Learning through Evolving Self-Modifying Neural Networks [0.0]
Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval.
Recent work addresses this by endowing artificial neural networks with neuromodulated plasticity, which has been shown to improve performance on simple RL tasks trained using backpropagation.
Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable.
Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient descent.
arXiv Detail & Related papers (2020-05-22T02:24:44Z) - Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z) - Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.