Learning dynamic representations of the functional connectome in neurobiological networks
- URL: http://arxiv.org/abs/2402.14102v2
- Date: Tue, 27 Feb 2024 19:54:21 GMT
- Title: Learning dynamic representations of the functional connectome in neurobiological networks
- Authors: Luciano Dyballa, Samuel Lang, Alexandra Haslund-Gourley, Eviatar Yemini, Steven W. Zucker
- Abstract summary: We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals. We show that our method robustly predicts causal interactions between neurons that generate behavior.
- Score: 41.94295877935867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The static synaptic connectivity of neuronal circuits stands in direct contrast to the dynamics of their function. Much as community interactions change over time, different neurons can participate actively in various combinations to effect behaviors at different times. We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals, and to reveal which communities form among neurons at different times. The inference occurs in two major steps. First, pairwise non-linear affinities between neuronal traces from brain-wide calcium activity are organized by non-negative tensor factorization (NTF); each factor specifies which groups of neurons are most likely interacting during an inferred time interval, and in which animals. Second, a generative model that allows for weighted community detection is applied to the functional motifs produced by NTF to reveal a dynamic functional connectome. Since time codes the different experimental variables (e.g., application of chemical stimuli), this provides an atlas of neural motifs active during separate stages of an experiment (e.g., stimulus application or spontaneous behaviors). Results from our analysis are experimentally validated, confirming that our method robustly predicts causal interactions between neurons that generate behavior. Code is available at https://github.com/dyballa/dynamic-connectomes.
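To make the two-step inference concrete, below is a minimal, hypothetical sketch on synthetic data. It uses tensorly's non_negative_parafac for the NTF step and swaps in networkx's greedy modularity clustering as a stand-in for the paper's generative weighted community-detection model; the random traces, correlation-based affinities, rank, and window sizes are illustrative assumptions, not the authors' settings (see the repository above for the actual implementation).

```python
# Minimal sketch of the two-step pipeline on synthetic data (illustrative
# assumptions throughout; not the authors' code or settings).
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac
from networkx import from_numpy_array
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_neurons, n_windows, samples_per_window = 30, 40, 50

# Stand-in for brain-wide calcium traces, shaped
# (neurons, time windows, samples within each window).
traces = rng.random((n_neurons, n_windows, samples_per_window))

# Build a (neuron x neuron x window) affinity tensor. The paper uses a
# non-linear affinity measure; plain correlation, clipped at zero so the
# tensor stays non-negative, stands in for it here.
affinity = np.empty((n_neurons, n_neurons, n_windows))
for t in range(n_windows):
    affinity[..., t] = np.clip(np.corrcoef(traces[:, t, :]), 0.0, None)

# Step 1: non-negative tensor factorization (NTF). Each rank-1 component
# couples a group of neurons with the time windows in which they interact.
rank = 5
weights, (neuron_a, neuron_b, window_f) = non_negative_parafac(
    tl.tensor(affinity), rank=rank, n_iter_max=200, init="random", random_state=0
)

# Step 2 (stand-in): community detection on the neuron-neuron graph implied
# by the most active component, in place of the paper's generative model.
k = int(np.argmax(window_f.sum(axis=0)))
w = np.outer(neuron_a[:, k], neuron_b[:, k])
w = (w + w.T) / 2.0
np.fill_diagonal(w, 0.0)  # drop self-affinities before clustering
communities = greedy_modularity_communities(from_numpy_array(w), weight="weight")
print(f"component {k}: {len(communities)} neuron communities")
```

Clipping the correlations at zero keeps the tensor non-negative, as NTF requires; in the paper that role is played by its non-linear affinity measure.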
Related papers
- Modeling dynamic neural activity by combining naturalistic video stimuli and stimulus-independent latent factors [5.967290675400836]
We propose a probabilistic model that incorporates video inputs along with stimulus-independent latent factors to capture variability in neuronal responses.
After training and testing our model on mouse V1 neuronal responses, we find that it outperforms video-only models in terms of log-likelihood.
We find that the learned latent factors strongly correlate with mouse behavior, although the model was trained without behavior data.
arXiv Detail & Related papers (2024-10-21T16:01:39Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state of the art of AI systems.
We conduct experiments on autonomous lane-keeping in a photorealistic driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Astrocytes as a mechanism for meta-plasticity and contextually-guided network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
They may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z)
- One-hot Generalized Linear Model for Switching Brain State Discovery [1.0132677989820746]
Neural interactions inferred from neural signals primarily reflect functional interactions.
We show that the learned prior should capture the state-constant interaction, shedding light on the underlying anatomical connectome.
Our methods effectively recover true interaction structures in simulated data, achieve the highest predictive likelihood with real neural datasets, and render interaction structures and hidden states more interpretable.
arXiv Detail & Related papers (2023-10-23T18:10:22Z)
- Equivalence of Additive and Multiplicative Coupling in Spiking Neural Networks [0.0]
Spiking neural network models characterize the emergent collective dynamics of circuits of biological neurons.
We show that spiking neural network models with additive coupling are equivalent to models with multiplicative coupling.
arXiv Detail & Related papers (2023-03-31T20:19:11Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We present a new multimodal dataset of spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Two-argument activation functions learn soft XOR operations like cortical neurons [6.88204255655161]
We learn canonical activation functions with two input arguments, analogous to basal and apical dendrites.
Remarkably, the resultant nonlinearities often produce soft XOR functions (a toy sketch of one such function follows this list).
Networks with these nonlinearities learn faster and perform better than networks with conventional ReLU nonlinearities at matched parameter counts.
arXiv Detail & Related papers (2021-10-13T17:06:20Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing, and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
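To make "soft XOR" from the two-argument activation paper above concrete, here is a toy sketch. The functional form f(a, b) = s(a) + s(b) - 2 s(a) s(b) is an assumption chosen for illustration, not the learned activation reported in that paper.

```python
# Hypothetical soft-XOR two-argument activation (illustrative only).
# With sigmoid s(.), f(a, b) = s(a) + s(b) - 2*s(a)*s(b) is the probability
# that exactly one of two independent Bernoulli(s(a)) and Bernoulli(s(b))
# events occurs, i.e. a smooth relaxation of XOR.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_xor(a, b):
    sa, sb = sigmoid(a), sigmoid(b)
    return sa + sb - 2.0 * sa * sb

# Saturated inputs recover the XOR truth table (outputs near 0 or 1).
for a, b in [(-5, -5), (-5, 5), (5, -5), (5, 5)]:
    print(a > 0, b > 0, round(float(soft_xor(a, b)), 3))
```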