Synaptic Weight Distributions Depend on the Geometry of Plasticity
- URL: http://arxiv.org/abs/2305.19394v2
- Date: Mon, 4 Mar 2024 20:22:16 GMT
- Title: Synaptic Weight Distributions Depend on the Geometry of Plasticity
- Authors: Roman Pogodin, Jonathan Cornford, Arna Ghosh, Gauthier Gidel,
Guillaume Lajoie, Blake Richards
- Abstract summary: We show that the distribution of synaptic weights will depend on the geometry of synaptic plasticity.
It should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.
- Score: 26.926824735306212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A growing literature in computational neuroscience leverages gradient descent
and learning algorithms that approximate it to study synaptic plasticity in the
brain. However, the vast majority of this work ignores a critical underlying
assumption: the choice of distance for synaptic changes - i.e. the geometry of
synaptic plasticity. Gradient descent assumes that the distance is Euclidean,
but many other distances are possible, and there is no reason that biology
necessarily uses Euclidean geometry. Here, using the theoretical tools provided
by mirror descent, we show that the distribution of synaptic weights will
depend on the geometry of synaptic plasticity. We use these results to show
that experimentally-observed log-normal weight distributions found in several
brain areas are not consistent with standard gradient descent (i.e. a Euclidean
geometry), but rather with non-Euclidean distances. Finally, we show that it
should be possible to experimentally test for different synaptic geometries by
comparing synaptic weight distributions before and after learning. Overall, our
work shows that the current paradigm in theoretical work on synaptic plasticity
that assumes Euclidean synaptic geometry may be misguided and that it should be
possible to experimentally determine the true geometry of synaptic plasticity
in the brain.
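The abstract's core contrast can be illustrated with a toy simulation (not the paper's actual model): additive updates correspond to gradient descent under a Euclidean geometry, while multiplicative updates correspond to exponentiated gradient, i.e. mirror descent with a negative-entropy potential. In this sketch the "gradients" are just i.i.d. Gaussian noise stand-ins, but the qualitative point survives: additive updates accumulate into a roughly normal weight distribution that can change sign, whereas multiplicative updates accumulate in log-space, preserving sign and yielding a roughly log-normal distribution.

```python
import math
import random

random.seed(0)

def train(n_steps=500, n_weights=1000, eta=0.05):
    """Drive the same random 'gradient' signal through two update rules.

    Toy illustration only: g is i.i.d. noise, not the gradient of a real
    loss, and the step size eta is arbitrary.
    """
    w_gd = [1.0] * n_weights  # Euclidean geometry: w <- w - eta * g
    w_eg = [1.0] * n_weights  # negative-entropy geometry: w <- w * exp(-eta * g)
    for _ in range(n_steps):
        for i in range(n_weights):
            g = random.gauss(0.0, 1.0)  # shared stand-in gradient sample
            w_gd[i] -= eta * g          # additive: weights sum increments
            w_eg[i] *= math.exp(-eta * g)  # multiplicative: log-weights sum increments
    return w_gd, w_eg

w_gd, w_eg = train()

# Multiplicative updates never change sign; additive updates can.
print("all EG weights positive:", min(w_eg) > 0)
print("some GD weights negative:", any(w < 0 for w in w_gd))
```

The sign-preservation property is one of the experimentally relevant differences the abstract alludes to: under a multiplicative (non-Euclidean) geometry, excitatory weights stay excitatory, and the log-weights, being sums of increments, tend toward a normal distribution, i.e. the weights themselves toward a log-normal one.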
Related papers
- What You See is Not What You Get: Neural Partial Differential Equations and The Illusion of Learning [0.0]
Differentiable programming for scientific machine learning embeds neural networks inside PDEs derived from first-principles physics, often called NeuralPDEs.
There is a widespread assumption in the community that NeuralPDEs are more trustworthy and generalizable than black box models.
We ask: Are NeuralPDEs and differentiable programming models trained on PDE simulations as physically interpretable as we think?
arXiv Detail & Related papers (2024-11-22T18:04:46Z)
- Alignment and Outer Shell Isotropy for Hyperbolic Graph Contrastive Learning [69.6810940330906]
We propose a novel contrastive learning framework to learn high-quality graph embedding.
Specifically, we design the alignment metric that effectively captures the hierarchical data-invariant information.
We show that in the hyperbolic space one has to address the leaf- and height-level uniformity which are related to properties of trees.
arXiv Detail & Related papers (2023-10-27T15:31:42Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Phenomenological modeling of diverse and heterogeneous synaptic dynamics at natural density [0.0]
This chapter sheds light on the synaptic organization of the brain from the perspective of computational neuroscience.
It provides an introductory overview on how to account for empirical data in mathematical models, implement them in software, and perform simulations reflecting experiments.
arXiv Detail & Related papers (2022-12-10T19:24:58Z)
- Unveiling the Sampling Density in Non-Uniform Geometric Graphs [69.93864101024639]
We consider graphs as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius.
In a social network, communities can be modeled as densely sampled areas, and hubs as nodes with a larger neighborhood radius.
We develop methods to estimate the unknown sampling density in a self-supervised fashion.
arXiv Detail & Related papers (2022-10-15T08:01:08Z)
- Does the Brain Infer Invariance Transformations from Graph Symmetries? [0.0]
The invariance of natural objects under perceptual changes is possibly encoded in the brain by symmetries in the graph of synaptic connections.
The graph can be established via unsupervised learning in a biologically plausible process across different perceptual modalities.
arXiv Detail & Related papers (2021-11-11T12:35:13Z)
- Learning a Single Neuron with Bias Using Gradient Descent [53.15475693468925]
We study the fundamental problem of learning a single neuron with a bias term.
We show that this is a significantly different and more challenging problem than the bias-less case.
arXiv Detail & Related papers (2021-06-02T12:09:55Z)
- Identifying the latent space geometry of network models through analysis of curvature [7.644165047073435]
We present a method to consistently estimate the manifold type, dimension, and curvature from an empirically relevant class of latent spaces.
Our core insight comes by representing the graph as a noisy distance matrix based on the ties between cliques.
arXiv Detail & Related papers (2020-12-19T00:35:29Z)
- Natural-gradient learning for spiking neurons [0.0]
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights.
We propose that plasticity instead follows natural gradient descent.
arXiv Detail & Related papers (2020-11-23T20:26:37Z)
- Disentangling by Subspace Diffusion [72.1895236605335]
We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known.
Our work reduces the question of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
arXiv Detail & Related papers (2020-06-23T13:33:19Z)
- Geometry of Similarity Comparisons [51.552779977889045]
We show that the ordinal capacity of a space form is related to its dimension and the sign of its curvature.
More importantly, we show that the statistical behavior of the ordinal spread random variables defined on a similarity graph can be used to identify its underlying space form.
arXiv Detail & Related papers (2020-06-17T13:37:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.