Neuron-centric Hebbian Learning
- URL: http://arxiv.org/abs/2403.12076v2
- Date: Tue, 16 Apr 2024 08:19:47 GMT
- Title: Neuron-centric Hebbian Learning
- Authors: Andrea Ferigo, Elia Cunegatti, Giovanni Iacca
- Abstract summary: We propose a novel plasticity model, called Neuron-centric Hebbian Learning (NcHL).
Compared to the ABCD rule, NcHL reduces the number of parameters from $5W$ to $5N$, where $W$ and $N$ are the number of weights and neurons, respectively, and usually $N \ll W$.
We also devise a ``weightless'' NcHL model, which requires less memory by approximating the weights based on a record of neuron activations.
- Score: 3.195234044113248
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: One of the most striking capabilities behind the learning mechanisms of the brain is the adaptation, through structural and functional plasticity, of its synapses. While synapses have the fundamental role of transmitting information across the brain, several studies show that it is the neuron activations that produce changes in synapses. Yet, most plasticity models devised for artificial Neural Networks (NNs), e.g., the ABCD rule, focus on synapses, rather than neurons, therefore optimizing synapse-specific Hebbian parameters. This approach, however, increases the complexity of the optimization process since each synapse is associated with multiple Hebbian parameters. To overcome this limitation, we propose a novel plasticity model, called Neuron-centric Hebbian Learning (NcHL), where optimization focuses on neuron- rather than synapse-specific Hebbian parameters. Compared to the ABCD rule, NcHL reduces the parameters from $5W$ to $5N$, where $W$ and $N$ are the number of weights and neurons, and usually $N \ll W$. We also devise a ``weightless'' NcHL model, which requires less memory by approximating the weights based on a record of neuron activations. Our experiments on two robotic locomotion tasks reveal that NcHL performs comparably to the ABCD rule, despite using up to $\sim 97$ times fewer parameters, thus allowing for scalable plasticity.
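To make the parameter counts above concrete, here is a minimal sketch contrasting a synapse-centric ABCD update (five Hebbian parameters per weight, hence $5W$) with a neuron-centric variant (five parameters per neuron, hence $5N$). It is an illustration only: the way the per-neuron parameters of the pre- and post-synaptic neurons are combined (simple averaging) and all names (`abcd_update`, `nc_update`, `theta`) are assumptions, not the exact formulation of the paper.

```python
import numpy as np

# Toy fully connected layer: 8 pre-synaptic and 4 post-synaptic neurons.
rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
N = n_pre + n_post          # number of neurons
W = n_pre * n_post          # number of weights

def abcd_update(w, pre, post, p):
    """Generalized Hebbian (ABCD) update, one (A, B, C, D, eta) tuple per synapse.

    dw_ij = eta_ij * (A_ij * pre_j * post_i + B_ij * pre_j + C_ij * post_i + D_ij)
    """
    A, B, C, D, eta = np.moveaxis(p, -1, 0)
    return w + eta * (A * np.outer(post, pre)
                      + B * pre[None, :]
                      + C * post[:, None]
                      + D)

# Synapse-centric ABCD rule: 5 * W evolved parameters.
p_synaptic = rng.normal(size=(n_post, n_pre, 5))

# Neuron-centric variant: 5 * N evolved parameters, one tuple per neuron.
theta = rng.normal(size=(N, 5))

def nc_update(w, pre, post, theta):
    # Assumption for illustration: a synapse's effective (A, B, C, D, eta) is the
    # average of the tuples of its pre- and post-synaptic neurons.
    p = 0.5 * (theta[:n_pre][None, :, :] + theta[n_pre:][:, None, :])
    return abcd_update(w, pre, post, p)

pre, post = rng.normal(size=n_pre), rng.normal(size=n_post)
w = np.zeros((n_post, n_pre))
print("ABCD parameters:", 5 * W)   # 160 in this toy layer
print("NcHL parameters:", 5 * N)   #  60 in this toy layer
w = nc_update(w, pre, post, theta)
```

The ratio $5W / 5N = W / N$ grows with network size, which is where the up-to-$\sim 97$ times reduction reported in the abstract comes from.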
Related papers
- Gated Parametric Neuron for Spike-based Audio Recognition [26.124844943674407]
Spiking neural networks (SNNs) aim to simulate real neural networks in the human brain with biologically plausible neurons.
This paper proposes a gated parametric neuron (GPN) to process spatio-temporal information effectively through a gating mechanism.
arXiv Detail & Related papers (2024-12-02T03:46:26Z) - Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model [43.107778640669544]
Large Language Models (LLMs) are composed of neurons that exhibit various behaviors and roles.
Recent studies have revealed that not all neurons are active across different datasets.
We introduce Neuron-Level Fine-Tuning (NeFT), a novel approach that refines the granularity of parameter training down to the individual neuron.
arXiv Detail & Related papers (2024-03-18T09:55:01Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
The ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Synaptic Stripping: How Pruning Can Bring Dead Neurons Back To Life [0.0]
We introduce Synaptic Stripping as a means to combat the dead neuron problem.
By automatically removing problematic connections during training, we can regenerate dead neurons.
We conduct several ablation studies to investigate these dynamics as a function of network width and depth.
arXiv Detail & Related papers (2023-02-11T23:55:50Z) - Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding [82.46024259137823]
We propose a cross-model comparative loss for a broad range of tasks.
We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks.
arXiv Detail & Related papers (2023-01-10T03:04:27Z) - Short-Term Plasticity Neurons Learning to Learn and Forget [0.0]
Short-term plasticity (STP) is a mechanism that stores decaying memories in synapses of the cerebral cortex.
Here we present a new type of recurrent neural unit, the Short-Term Plasticity Neuron (STPN), which turns out to be strikingly powerful.
arXiv Detail & Related papers (2022-06-28T14:47:56Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity [14.19992298135814]
We show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent.
We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks.
arXiv Detail & Related papers (2020-02-24T23:19:17Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.