Natural-gradient learning for spiking neurons
- URL: http://arxiv.org/abs/2011.11710v2
- Date: Wed, 23 Feb 2022 19:29:15 GMT
- Title: Natural-gradient learning for spiking neurons
- Authors: Elena Kreutzer, Walter M. Senn, Mihai A. Petrovici
- Abstract summary: In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights.
We propose that plasticity instead follows natural gradient descent.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In many normative theories of synaptic plasticity, weight updates implicitly
depend on the chosen parametrization of the weights. This problem relates, for
example, to neuronal morphology: synapses which are functionally equivalent in
terms of their impact on somatic firing can differ substantially in spine size
due to their different positions along the dendritic tree. Classical theories
based on Euclidean gradient descent can easily lead to inconsistencies due to
such parametrization dependence. The issues are solved in the framework of
Riemannian geometry, in which we propose that plasticity instead follows
natural gradient descent. Under this hypothesis, we derive a synaptic learning
rule for spiking neurons that couples functional efficiency with the
explanation of several well-documented biological phenomena such as dendritic
democracy, multiplicative scaling and heterosynaptic plasticity. We therefore
suggest that in its search for functional synaptic plasticity, evolution might
have come up with its own version of natural gradient descent.
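As a rough illustration of the core idea (not the spiking-neuron learning rule derived in the paper), natural gradient descent preconditions the ordinary Euclidean gradient with the inverse Fisher information metric, which makes the resulting weight update independent of how the weights are parametrized. The toy loss and Fisher matrix in the sketch below are assumptions chosen only for illustration:

```python
import numpy as np

def natural_gradient_step(w, grad, fisher, lr=1e-2):
    """One natural-gradient step: w <- w - lr * F(w)^{-1} * grad L(w).

    Preconditioning with the Fisher metric F makes the update invariant
    under smooth reparametrizations of w, unlike plain Euclidean descent.
    """
    # Solve F x = grad rather than inverting F explicitly; the small
    # ridge term keeps the solve well-conditioned.
    x = np.linalg.solve(fisher + 1e-8 * np.eye(len(w)), grad)
    return w - lr * x

# Toy quadratic loss L(w) = 0.5 * w @ A @ w (an assumption for illustration).
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
w = np.array([1.0, -2.0])
grad = A @ w                      # Euclidean gradient of the toy loss
fisher = np.diag([10.0, 0.1])     # assumed Fisher metric, not derived here
print(natural_gradient_step(w, grad, fisher))
```

Because the Fisher matrix transforms as a metric under a change of weight coordinates, the same functional update is obtained in any parametrization; this is the sense in which such a rule removes the dependence on, for example, a synapse's position along the dendritic tree.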
Related papers
- The natural stability of autonomous morphology [0.0]
We propose an explanation for the resilience of autonomous morphology.
Dissociative evidence creates a repulsion dynamic which prevents morphomic classes from collapsing.
We show that autonomous morphology, far from being 'unnatural' (e.g. Aronoff), is rather the natural (rational) process of inference applied to inflectional systems.
arXiv Detail & Related papers (2024-11-06T10:14:58Z) - Automated Model Discovery for Tensional Homeostasis: Constitutive Machine Learning in Growth and Remodeling [0.0]
We extend our inelastic Constitutive Artificial Neural Networks (iCANNs) by incorporating kinematic growth and homeostatic surfaces.
We evaluate the ability of the proposed network to learn from experimentally obtained tissue equivalent data at the material point level.
arXiv Detail & Related papers (2024-10-17T15:12:55Z) - Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z) - Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - Synaptic Weight Distributions Depend on the Geometry of Plasticity [26.926824735306212]
We show that the distribution of synaptic weights will depend on the geometry of synaptic plasticity.
It should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.
arXiv Detail & Related papers (2023-05-30T20:16:23Z) - Theory of coupled neuronal-synaptic dynamics [3.626013617212667]
In neural circuits, synaptic strengths influence neuronal activity by shaping network dynamics.
We study a recurrent-network model in which neuronal units and synaptic couplings are interacting dynamic variables.
We show that adding Hebbian plasticity slows activity in chaotic networks and can induce chaos.
arXiv Detail & Related papers (2023-02-17T16:42:59Z) - Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that the prediction performance via our model outperforms other state-of-the-art models.
arXiv Detail & Related papers (2022-05-21T14:08:53Z) - The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z) - Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Formalising the Use of the Activation Function in Neural Inference [0.0]
We discuss how a spike in a biological neurone belongs to a particular class of phase transitions in statistical physics.
We show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics.
This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning.
arXiv Detail & Related papers (2021-02-02T19:42:21Z) - Learning compositional functions via multiplicative weight updates [97.9457834009578]
We show that multiplicative weight updates satisfy a descent lemma tailored to compositional functions.
We show that Madam can train state of the art neural network architectures without learning rate tuning.
arXiv Detail & Related papers (2020-06-25T17:05:19Z)
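To make the idea of a multiplicative weight update concrete, here is a minimal exponentiated-gradient-style sketch in which each weight is rescaled by a factor close to one, so the step controls the relative rather than absolute change of each weight. This is a generic illustration; the exact Madam rule in the paper differs in detail (e.g. in how gradients are normalized).

```python
import numpy as np

def multiplicative_step(w, grad, lr=0.01):
    """Sign-preserving multiplicative update (exponentiated-gradient style).

    Each weight is multiplied by exp(-lr * sign(w) * grad), so every step
    changes a weight by a controlled fraction of its current magnitude.
    Note: a generic sketch, not the exact Madam optimizer update.
    """
    return w * np.exp(-lr * np.sign(w) * grad)

# Toy usage on the quadratic loss L(w) = 0.5 * ||w||^2, so grad = w.
w = np.array([1.0, -0.5, 2.0])
for _ in range(100):
    w = multiplicative_step(w, grad=w, lr=0.1)
print(w)  # magnitudes shrink multiplicatively toward zero, signs preserved
```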
This list is automatically generated from the titles and abstracts of the papers on this site.