Dis-inhibitory neuronal circuits can control the sign of synaptic
plasticity
- URL: http://arxiv.org/abs/2310.19614v2
- Date: Mon, 11 Dec 2023 18:04:14 GMT
- Title: Dis-inhibitory neuronal circuits can control the sign of synaptic
plasticity
- Authors: Julian Rossbroich, Friedemann Zenke
- Abstract summary: We show that error-modulated learning emerges naturally at the circuit level when recurrent inhibition explicitly influences Hebbian plasticity.
Our findings bridge the gap between functional and experimentally observed plasticity rules.
- Score: 6.227678387562755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How neuronal circuits achieve credit assignment remains a central unsolved
question in systems neuroscience. Various studies have suggested plausible
solutions for back-propagating error signals through multi-layer networks.
These purely functionally motivated models assume distinct neuronal
compartments to represent local error signals that determine the sign of
synaptic plasticity. However, this explicit error modulation is inconsistent
with phenomenological plasticity models in which the sign depends primarily on
postsynaptic activity. Here we show how a plausible microcircuit model and
Hebbian learning rule derived within an adaptive control theory framework can
resolve this discrepancy. Assuming errors are encoded in top-down
dis-inhibitory synaptic afferents, we show that error-modulated learning
emerges naturally at the circuit level when recurrent inhibition explicitly
influences Hebbian plasticity. The same learning rule accounts for
experimentally observed plasticity in the absence of inhibition and performs
comparably to back-propagation of error (BP) on several non-linearly separable
benchmarks. Our findings bridge the gap between functional and experimentally
observed plasticity rules and make concrete predictions on inhibitory
modulation of excitatory plasticity.
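For readers who want the gist of the proposed mechanism in code, here is a minimal rate-based sketch of the idea described in the abstract: the top-down error signal acts by removing recurrent inhibition, and because the inhibitory current enters the Hebbian rule explicitly, the resulting update is error-modulated. All variable names, the gating form, and the toy teacher are our own illustration, not the paper's equations.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one rate-based readout neuron; a "teacher" defines the target.
n_in, eta, i0 = 5, 0.05, 1.0            # i0: baseline recurrent inhibition
w = rng.normal(scale=0.1, size=n_in)    # excitatory afferent weights
w_teacher = rng.normal(size=n_in)       # hypothetical teaching circuit

for step in range(2000):
    x = rng.random(n_in)                        # presynaptic rates
    y = max(float(w @ x), 0.0)                  # postsynaptic rate
    y_target = max(float(w_teacher @ x), 0.0)

    # Top-down error arrives as dis-inhibition: a positive error removes
    # recurrent inhibition (inh < i0), a negative error adds it (inh > i0).
    error = y_target - y
    inh = i0 - error

    # Hebbian update in which the inhibitory current enters explicitly;
    # the sign of (i0 - inh) = error decides between LTP and LTD.
    w += eta * (i0 - inh) * x

x = rng.random(n_in)
print("final error:", max(float(w_teacher @ x), 0.0) - max(float(w @ x), 0.0))
```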
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
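CSDP is described above as a forward-forward-based, backpropagation-free scheme. The sketch below illustrates the generic forward-forward ingredient such schemes build on: each layer maximizes a local "goodness" on positive data and minimizes it on negative data, so no gradients cross layers. The data, loss form, and constants are illustrative assumptions, not the CSDP rule itself.
```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, eta = 20, 32, 0.03
theta = n_hid / 2.0                      # goodness threshold
W = rng.normal(scale=0.1, size=(n_hid, n_in))

def goodness(h):
    return float(np.sum(h * h))          # sum of squared activities

for step in range(1000):
    for x, sign in ((rng.normal(+0.5, 1.0, n_in), +1.0),   # "positive" data
                    (rng.normal(-0.5, 1.0, n_in), -1.0)):  # "negative" data
        h = np.maximum(W @ x, 0.0)
        # Local logistic objective: drive goodness above theta on positive
        # samples and below theta on negative ones -- no backprop needed.
        z = np.clip(sign * (goodness(h) - theta), -50.0, 50.0)
        p = 1.0 / (1.0 + np.exp(-z))
        W += eta * (1.0 - p) * sign * 2.0 * np.outer(h, x)

h_pos = np.maximum(W @ rng.normal(+0.5, 1.0, n_in), 0.0)
h_neg = np.maximum(W @ rng.normal(-0.5, 1.0, n_in), 0.0)
print("goodness(pos) > goodness(neg):", goodness(h_pos) > goodness(h_neg))
```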
- Confidence Regulation Neurons in Language Models [91.90337752432075]
This study investigates the mechanisms by which large language models represent and regulate uncertainty in next-token predictions.
Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits.
Token frequency neurons, which we describe here for the first time, boost or suppress each token's logit in proportion to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution.
arXiv Detail & Related papers (2024-06-24T01:31:03Z)
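The mechanism attributed to token frequency neurons is easy to make concrete: adding a component proportional to each token's log frequency to the logits moves the softmax output towards (or, with the opposite sign, away from) the unigram distribution. The toy vocabulary and write strengths below are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(2)

vocab = 10
counts = rng.integers(1, 1000, size=vocab).astype(float)
unigram = counts / counts.sum()
log_freq = np.log(counts)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

logits = rng.normal(size=vocab)   # stand-in for context-dependent logits

# A positive write strength tends to move the output towards the unigram
# distribution; a negative one moves it away.
for alpha in (-1.0, 0.0, 0.5, 1.0):
    out = softmax(logits + alpha * log_freq)
    print(f"alpha={alpha:+.1f}  KL(output || unigram) = {kl(out, unigram):.3f}")
```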
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Single-phase deep learning in cortico-cortical networks [1.7249361224827535]
We introduce a new model, bursting cortico-cortical networks (BurstCCN), which integrates bursting activity, short-term plasticity and dendrite-targeting interneurons.
Our results suggest that cortical features across sub-cellular, cellular, microcircuit and systems levels jointly underlie single-phase efficient deep learning in the brain.
arXiv Detail & Related papers (2022-06-23T15:10:57Z)
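BurstCCN builds on burst multiplexing, in which one ensemble can carry two streams at once: the event rate encodes the feedforward signal, while the fraction of events that are bursts encodes a feedback signal. The caricature below decodes both streams from random spiking; the numbers are illustrative assumptions, not the paper's model.
```python
import numpy as np

rng = np.random.default_rng(3)

n = 10000                 # neurons in the ensemble
feedforward = 0.6         # signal carried by the event rate
feedback = 0.2            # error signal carried by the burst fraction

events = rng.random(n) < feedforward
bursts = events & (rng.random(n) < 0.5 + feedback)

event_rate = events.mean()                        # decode feedforward stream
burst_frac = bursts.sum() / max(events.sum(), 1)  # decode feedback stream

print(f"decoded feedforward ~ {event_rate:.2f} (true 0.60)")
print(f"decoded feedback   ~ {burst_frac - 0.5:.2f} (true 0.20)")
```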
- Modeling Implicit Bias with Fuzzy Cognitive Maps [0.0]
This paper presents a Fuzzy Cognitive Map model to quantify implicit bias in structured datasets.
We introduce a new reasoning mechanism equipped with a normalization-like transfer function that prevents neurons from saturating.
arXiv Detail & Related papers (2021-12-23T17:04:12Z)
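To make the "normalization-like transfer function" concrete, here is a generic fuzzy cognitive map iteration in which activations are rescaled by their current dynamic range rather than squashed by a sigmoid; the specific rescaling is our assumption, standing in for the function the authors propose.
```python
import numpy as np

rng = np.random.default_rng(4)

n = 6
W = rng.uniform(-1.0, 1.0, size=(n, n))   # signed causal influence weights
np.fill_diagonal(W, 0.0)
a = rng.uniform(0.0, 1.0, size=n)          # initial concept activations

for step in range(25):
    raw = a @ W
    # Normalization-like transfer: rescale into [0, 1] using the current
    # dynamic range, so no concept gets stuck in the flat tails of a
    # sigmoid (the saturation problem the entry alludes to).
    lo, hi = raw.min(), raw.max()
    a = (raw - lo) / (hi - lo + 1e-9)

print("activation vector after 25 steps:", np.round(a, 3))
```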
- A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised Learning [0.0]
Spiking neural networks (SNNs) are considered a promising basis for performing all kinds of learning tasks: unsupervised, supervised and reinforcement learning.
Learning in SNNs is implemented through synaptic plasticity: the rules that determine the dynamics of synaptic weights, usually as a function of the activity of the pre- and post-synaptic neurons.
arXiv Detail & Related papers (2021-11-12T15:26:52Z)
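As a concrete instance of a plasticity rule driven by pre- and post-synaptic activity, the sketch below implements the standard pair-based STDP window; it is in the spirit of the models such papers optimize, not necessarily this paper's exact rule.
```python
import numpy as np

# Pair-based STDP: pre-before-post (dt > 0) potentiates, post-before-pre
# (dt < 0) depresses, with exponential decay in |dt|.
A_plus, A_minus, tau = 0.010, 0.012, 20.0   # amplitudes, time constant (ms)

def stdp(dt):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau)
    return -A_minus * np.exp(dt / tau)

for dt in (-40.0, -10.0, 10.0, 40.0):
    print(f"dt = {dt:+5.0f} ms  ->  dw = {stdp(dt):+.5f}")
```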
- Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z)
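As we read the abstract above, the key device in Latent Equilibrium is a prospective readout u + tau * du/dt that exactly cancels a neuron's own low-pass dynamics, so slow neurons can support fast computation. The single-neuron demo below shows that cancellation; the paper's full network model is richer.
```python
import numpy as np

tau, dt = 10.0, 0.1                 # membrane time constant, step size (ms)
T = int(200 / dt)
inp = np.zeros(T)
inp[T // 4:] = 1.0                  # step input at t = 50 ms

u = 0.0
lagged, prospective = np.zeros(T), np.zeros(T)
for t in range(T):
    du = (-u + inp[t]) / tau
    lagged[t] = u                   # naive readout of the slow membrane
    prospective[t] = u + tau * du   # prospective readout: cancels the lag
    u += dt * du

i = int(55 / dt)                    # 5 ms after the step
print(f"t = 55 ms: input = 1.0, u = {lagged[i]:.3f}, "
      f"prospective = {prospective[i]:.3f}")
```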
- Sign and Relevance Learning [0.0]
Standard models of biologically realistic reinforcement learning employ a global error signal, which implies the use of shallow networks.
In this study, we introduce a novel network that solves this problem by propagating only the sign of the plasticity change.
Neuromodulation can be understood as a rectified error or relevance signal, while the top-down sign of the error signal determines whether long-term depression will occur.
arXiv Detail & Related papers (2021-10-14T11:57:57Z)
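The division of labor in the entry above, a global rectified relevance signal times a top-down sign, suggests a three-factor update of the following form. The exact factorization is our illustration, not the paper's equations.
```python
import numpy as np

rng = np.random.default_rng(5)

n_in, eta = 8, 0.01
w = rng.normal(scale=0.1, size=n_in)

def three_factor_update(w, pre, post, error):
    relevance = abs(error)          # rectified error: global neuromodulator
    sign = np.sign(error)           # top-down sign: selects LTP vs. LTD
    # Local Hebbian term (pre * post) gated by the two global factors.
    return w + eta * sign * relevance * pre * post

pre = rng.random(n_in)
post = float(w @ pre)
w = three_factor_update(w, pre, post, error=0.3)
print("updated weights:", np.round(w, 4))
```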
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
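Gradient starvation is easy to reproduce in a toy logistic regression: with two redundant features of unequal strength, the cross-entropy gradient concentrates on the stronger one and the weaker feature's weight barely grows. The data and constants below are our own construction, not the paper's experiments.
```python
import numpy as np

rng = np.random.default_rng(6)

# Two redundant features predict the label, but feature 0 is much stronger.
n = 2000
y = rng.integers(0, 2, size=n) * 2 - 1                # labels in {-1, +1}
x = np.stack([y * 2.0 + 0.1 * rng.normal(size=n),     # strong feature
              y * 0.5 + 1.0 * rng.normal(size=n)],    # weak feature
             axis=1)

w = np.zeros(2)
for step in range(500):
    margins = np.clip(y * (x @ w), -50.0, 50.0)
    # Gradient of the mean logistic loss log(1 + exp(-margin)).
    g = -(x * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.5 * g

# Nearly all of the decision is carried by w[0]; w[1] is "starved"
# even though feature 1 is also informative about the label.
print("learned weights:", np.round(w, 3))
```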
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
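A minimal sketch of the relaxed setup described above: a predictive-coding inference loop in which the backward weights are a separate parameter set rather than the forward transpose. Initializing them near the transpose for a stable demo is our simplification; the paper's point is that the tied-weight constraint can be removed by learning the extra parameters with Hebbian updates.
```python
import numpy as np

rng = np.random.default_rng(7)

n_obs, n_lat = 10, 6
W = rng.normal(scale=0.3, size=(n_obs, n_lat))  # top-down generative weights
# Separate backward weights: NOT tied to W.T. They start near the transpose
# here so inference is stable, and get their own Hebbian update below.
B = W.T + rng.normal(scale=0.05, size=(n_lat, n_obs))

x = rng.normal(size=n_obs)                      # observed data
z = np.zeros(n_lat)                             # latent activity

# Inference: relax z to reduce the prediction error eps = x - W z,
# propagating eps through B instead of the forward transpose.
for step in range(100):
    eps = x - W @ z
    z += 0.1 * (B @ eps)

# Learning: Hebbian-style updates from locally available signals.
eps = x - W @ z
W += 0.01 * np.outer(eps, z)
B += 0.01 * np.outer(z, eps)
print("remaining prediction error:", round(float(np.linalg.norm(x - W @ z)), 4))
```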
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.