Generalization of generative model for neuronal ensemble inference method
- URL: http://arxiv.org/abs/2211.05634v3
- Date: Tue, 27 Jun 2023 20:22:13 GMT
- Title: Generalization of generative model for neuronal ensemble inference method
- Authors: Shun Kimura, Koujin Takeda
- Abstract summary: In this study, we extend the range of the variable expressing the neuronal state, and generalize the likelihood of the model for the extended variables.
This generalization, which removes the restriction to binary input, enables us to perform soft clustering and to apply the method to non-stationary neuroactivity data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various brain functions that are necessary to maintain life activities
materialize through the interaction of countless neurons. Therefore, it is
important to analyze functional neuronal networks. To elucidate the mechanisms
of brain function, functional neuronal ensembles and hubs are actively studied
across all areas of neuroscience. In addition, recent studies suggest that the
existence of functional neuronal ensembles and hubs contributes to the
efficiency of information processing. For these reasons, there is a demand for
methods to infer functional neuronal ensembles from neuronal activity data, and
methods based on Bayesian inference have been proposed. However, modeling the
activity for Bayesian inference poses a problem: the features of each neuron's
activity are non-stationary, depending on physiological experimental
conditions. As a result, the assumption of stationarity in the Bayesian
inference model impedes inference, which destabilizes the inference results and
degrades the inference accuracy. In this study, we extend the range of the
variable expressing the neuronal state, and generalize the likelihood of the
model for the extended variables. Compared with the previous study, our model
can express the neuronal state in a larger space. This generalization, which
removes the restriction to binary input, enables us to perform soft clustering
and to apply the method to non-stationary neuroactivity data. In addition, to
demonstrate the effectiveness of the method, we apply it to multiple synthetic
fluorescence datasets generated from electrical potential data in the leaky
integrate-and-fire model.
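The validation setup described above — spikes from a leaky integrate-and-fire (LIF) neuron converted into synthetic fluorescence traces — can be sketched as follows. This is not the authors' code; all parameter values and the exponential calcium kernel are illustrative assumptions.

```python
import numpy as np

def simulate_lif_fluorescence(duration_s=2.0, dt=1e-3, tau_m=0.02,
                              v_rest=-70e-3, v_thresh=-50e-3, v_reset=-70e-3,
                              input_current=2.1e-9, resistance=1e7,
                              tau_ca=0.5, seed=0):
    """Simulate a leaky integrate-and-fire neuron and derive a synthetic
    fluorescence trace from its spike train. Parameters are illustrative,
    not taken from the paper."""
    rng = np.random.default_rng(seed)
    n_steps = int(duration_s / dt)
    v = v_rest
    spikes = np.zeros(n_steps)
    for t in range(n_steps):
        # Leaky integration of a noisy input current toward threshold.
        noise = rng.normal(0.0, 0.2e-9)
        v += (-(v - v_rest) + resistance * (input_current + noise)) * dt / tau_m
        if v >= v_thresh:      # threshold crossing: emit spike, reset potential
            spikes[t] = 1.0
            v = v_reset
    # Convolve the spike train with an exponential kernel to mimic the slow
    # decay of a calcium-indicator fluorescence signal.
    kernel = np.exp(-np.arange(0.0, 5 * tau_ca, dt) / tau_ca)
    fluorescence = np.convolve(spikes, kernel)[:n_steps]
    return spikes, fluorescence

spikes, fluor = simulate_lif_fluorescence()
print(f"{int(spikes.sum())} spikes, peak fluorescence {fluor.max():.2f}")
```

With the illustrative values above, the steady-state depolarization (R·I ≈ 21 mV) slightly exceeds the 20 mV threshold gap, so the neuron fires repeatedly over the 2 s window.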
Related papers
- Learning dynamic representations of the functional connectome in neurobiological networks [41.94295877935867]
We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals.
We show that our method is able to robustly predict causal interactions between neurons to generate behavior.
arXiv Detail & Related papers (2024-02-21T19:54:25Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles [3.625425081454343]
Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity.
These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity.
We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization.
arXiv Detail & Related papers (2022-10-06T18:37:49Z)
- STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance on ensemble level in estimating neural activities across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity [0.0]
We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce a level of network-wide synchrony comparable to that of biological networks.
The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
arXiv Detail & Related papers (2021-10-15T17:55:04Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.