Astrocytes as a mechanism for meta-plasticity and contextually-guided
network function
- URL: http://arxiv.org/abs/2311.03508v2
- Date: Fri, 10 Nov 2023 15:59:40 GMT
- Title: Astrocytes as a mechanism for meta-plasticity and contextually-guided
network function
- Authors: Lulu Gong, Fabio Pasqualetti, Thomas Papouin and ShiNung Ching
- Abstract summary: Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
Astrocytes may play a more direct and active role in brain function and neural computation.
- Score: 2.66269503676104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell and are
found in the brain of all vertebrates. While traditionally viewed as being
supportive of neurons, it is increasingly recognized that astrocytes may play a
more direct and active role in brain function and neural computation. On
account of their sensitivity to a host of physiological covariates and ability
to modulate neuronal activity and connectivity on slower time scales,
astrocytes may be particularly well poised to modulate the dynamics of neural
circuits in functionally salient ways. In the current paper, we seek to capture
these features via actionable abstractions within computational models of
neuron-astrocyte interaction. Specifically, we examine how nested feedback loops
of neuron-astrocyte interaction, acting over separated time-scales, may endow
astrocytes with the capability to enable learning in context-dependent
settings, where fluctuations in task parameters occur much more slowly than
within-task requirements. We pose a general model of neuron-synapse-astrocyte
interaction and use formal analysis to characterize how astrocytic modulation
may constitute a form of meta-plasticity, altering the ways in which synapses
and neurons adapt as a function of time. We then embed this model in a
bandit-based reinforcement learning task environment, and show how the presence
of time-scale separated astrocytic modulation enables learning over multiple
fluctuating contexts. Indeed, these networks learn far more reliably than
dynamically homogeneous networks and conventional non-network-based bandit
algorithms. Our results indicate how the presence of neuron-astrocyte
interaction in the brain may benefit learning over different time-scales and
the conveyance of task-relevant contextual information onto circuit dynamics.
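The abstract's core mechanism, nested feedback loops acting over separated time-scales, can be caricatured in a few lines. Below is a minimal sketch, not the paper's formal model: the time constants, tanh rate dynamics, Hebbian-style weight update, and sigmoidal astrocytic gate are all assumptions, chosen only to illustrate how a slow astrocyte-like variable can gate the rate of synaptic change, i.e., meta-plasticity.

```python
import numpy as np

# Illustrative time constants; the separation (neuron << synapse << astrocyte)
# is the point, while the specific values and functional forms are assumptions.
DT = 0.1
TAU_R = 1.0      # fast: neuronal firing rate
TAU_W = 20.0     # slower: synaptic weight (plasticity)
TAU_A = 500.0    # slowest: astrocytic state

def step(r, w, a, u):
    """One Euler step of the nested neuron-synapse-astrocyte loop."""
    # Neuron: rate relaxes quickly toward its synaptically weighted input.
    r = r + (DT / TAU_R) * (-r + np.tanh(w * u))
    # Astrocyte gates the *rate* of synaptic change (meta-plasticity):
    # it alters how the synapse adapts rather than driving the neuron.
    gate = 1.0 / (1.0 + np.exp(-4.0 * a))
    # Synapse: Hebbian-like update with mild decay, scaled by the gate.
    w = w + (DT / TAU_W) * gate * (r * u - 0.1 * w)
    # Astrocyte: slowly integrates the input-activity correlation.
    a = a + (DT / TAU_A) * (-a + r * u)
    return r, w, a

r, w, a = 0.0, 0.5, 0.0
for t in range(10000):
    u = 1.0 if (t // 2000) % 2 == 0 else -1.0   # context flips slowly
    r, w, a = step(r, w, a, u)
print(f"rate={r:.3f}  weight={w:.3f}  astrocyte={a:.3f}")
```

Because the astrocytic gate multiplies the learning term, the synapse's effective learning rate is itself a slow dynamical variable: plasticity of plasticity.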
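The bandit embedding can be sketched in the same hedged spirit: a two-armed bandit whose better arm swaps every `block` trials, so context fluctuates far more slowly than individual decisions. The paper's network model is replaced here by a bare value learner; the astrocyte-like ingredient is a slowly integrated reward average whose collapse boosts the fast learning rate. All names and constants (`block`, `eta_fast`, `eta_slow`, the gate form) are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonstationary two-armed bandit: the better arm swaps every `block` trials.
p_arms = np.array([0.8, 0.2])
block, n_trials = 500, 4000

q = np.zeros(2)        # fast action values
bar_r = 0.5            # slow, astrocyte-like running average of reward
eta_fast, eta_slow = 0.1, 0.005
rewards = []

for t in range(n_trials):
    if t % block == 0 and t > 0:
        p_arms = p_arms[::-1]                  # slow context switch
    z = np.exp(q - q.max())                    # softmax action selection
    arm = rng.choice(2, p=z / z.sum())
    rew = float(rng.random() < p_arms[arm])
    # Slow variable gates the fast learning rate: when the running reward
    # average collapses (a likely context change), plasticity is boosted.
    gate = 1.0 + 4.0 * max(0.0, 0.5 - bar_r)
    q[arm] += gate * eta_fast * (rew - q[arm])
    bar_r += eta_slow * (rew - bar_r)
    rewards.append(rew)

print("mean reward:", np.mean(rewards))
```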
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation.
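For orientation, the textbook Kuramoto phase dynamics the name alludes to are sketched below; AKOrN's actual oscillatory units are defined in the paper and differ from this classic form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classic Kuramoto dynamics for N coupled phase oscillators:
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
N, K, DT = 32, 1.5, 0.01
theta = rng.uniform(0, 2 * np.pi, N)     # phases
omega = rng.normal(0.0, 1.0, N)          # natural frequencies

for _ in range(5000):
    diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
    theta += DT * (omega + (K / N) * np.sin(diff).sum(axis=1))

# Order parameter |mean(exp(i*theta))| -> 1 as the oscillators synchronize.
print("synchrony:", abs(np.exp(1j * theta).mean()))
```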
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models with task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture, compatible with and scalable within deep learning frameworks.
We show that end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Backpropagation through space, time, and the brain [2.10686639478348]
We introduce Generalized Latent Equilibrium (GLE), a computational framework for fully local spatio-temporal credit assignment in physical, dynamical networks of neurons.
In particular, GLE exploits the morphology of dendritic trees to enable more complex information storage and processing in single neurons.
arXiv Detail & Related papers (2024-03-25T16:57:02Z)
- Learning dynamic representations of the functional connectome in neurobiological networks [41.94295877935867]
We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals.
We show that our method is able to robustly predict causal interactions between neurons to generate behavior.
arXiv Detail & Related papers (2024-02-21T19:54:25Z)
- Astrocyte-Enabled Advancements in Spiking Neural Networks for Large Language Modeling [7.863029550014263]
Astrocyte-Modulated Spiking Neural Network (AstroSNN) exhibits exceptional performance in tasks involving memory retention and natural language generation.
AstroSNN shows low latency, high throughput, and reduced memory usage in practical applications.
arXiv Detail & Related papers (2023-12-12T06:56:31Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
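A generic homeostatic-threshold sketch of the idea (not the paper's MPATH equations; the leak, reset, and target-rate constants below are assumptions): the firing threshold drifts until the unit's firing rate settles near a target, giving the dynamic equilibrium described above.

```python
import numpy as np

# Generic homeostasis: the membrane potential leaks toward rest, and the
# firing threshold adapts toward recent activity so the unit self-regulates.
DT, TAU_V, TAU_TH = 1.0, 10.0, 200.0
v, theta, target = 0.0, 1.0, 0.05      # potential, threshold, target rate
spikes = 0

rng = np.random.default_rng(2)
for t in range(5000):
    i_in = rng.random()                      # arbitrary positive drive
    v += (DT / TAU_V) * (-v + i_in)
    fired = v > theta
    if fired:
        v = 0.0                              # reset after a spike
        spikes += 1
    # Threshold homeostasis: rises when firing exceeds the target rate,
    # falls otherwise, pulling the mean rate toward `target`.
    theta += (DT / TAU_TH) * ((1.0 if fired else 0.0) - target)

print("mean rate:", spikes / 5000, "final threshold:", theta)
```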
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Advantages of biologically-inspired adaptive neural activation in RNNs during learning [10.357949759642816]
We introduce a novel parametric family of nonlinear activation functions inspired by input-frequency response curves of biological neurons.
We find that activation adaptation provides distinct task-specific solutions and, in some cases, improves both learning speed and performance.
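As a hedged illustration of such a parametric family, here is a generic softplus with learnable gain and threshold, a stand-in loosely inspired by neuronal f-I curves rather than the specific functions defined in the paper:

```python
import numpy as np

def adaptive_act(x, gain, thresh):
    """gain * softplus(x - thresh): a simple parametric activation whose
    per-neuron (gain, thresh) could be trained alongside the RNN weights,
    letting each unit adapt its input-output curve to the task."""
    return gain * np.log1p(np.exp(x - thresh))

x = np.linspace(-3, 3, 7)
print(adaptive_act(x, gain=1.5, thresh=0.5))
```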
arXiv Detail & Related papers (2020-06-22T13:49:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.