One-hot Generalized Linear Model for Switching Brain State Discovery
- URL: http://arxiv.org/abs/2310.15263v1
- Date: Mon, 23 Oct 2023 18:10:22 GMT
- Title: One-hot Generalized Linear Model for Switching Brain State Discovery
- Authors: Chengrui Li, Soon Ho Kim, Chris Rodgers, Hannah Choi, Anqi Wu
- Abstract summary: Neural interactions inferred from neural signals primarily reflect functional interactions.
We show that the learned prior captures the state-constant interaction, shedding light on the underlying anatomical connectome.
Our methods effectively recover the true interaction structures in simulated data, achieve the highest predictive likelihood on real neural datasets, and render interaction structures and hidden states more interpretable.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exposing meaningful and interpretable neural interactions is critical to
understanding neural circuits. Inferred neural interactions from neural signals
primarily reflect functional interactions. In a long experiment, subject
animals may experience different stages defined by the experiment, stimuli, or
behavioral states, and hence functional interactions can change over time. To
model dynamically changing functional interactions, prior work employs
state-switching generalized linear models with hidden Markov models (i.e.,
HMM-GLMs). However, we argue they lack biological plausibility, as functional
interactions are shaped and confined by the underlying anatomical connectome.
Here, we propose a novel prior-informed state-switching GLM. We introduce both
a Gaussian prior and a one-hot prior over the GLM in each state. The priors are
learnable. We show that the learned prior captures the state-constant
interaction, shedding light on the underlying anatomical connectome and
revealing more likely physical neuron interactions. The
state-dependent interaction modeled by each GLM offers traceability to capture
functional variations across multiple brain states. Our methods effectively
recover true interaction structures in simulated data, achieve the highest
predictive likelihood on real neural datasets, and render the inferred
interaction structures and hidden states more interpretable.
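For intuition, here is a minimal numpy sketch of the penalized M-step objective such a prior-informed HMM-GLM could optimize for one state: a Poisson GLM likelihood weighted by the HMM's state responsibilities, plus a learnable Gaussian prior tying every state's weights to a shared mean (the state-constant interaction). All names and shapes are illustrative assumptions, not the authors' implementation; the one-hot variant would swap the Gaussian log-density for a one-hot mixture prior over each weight.

```python
import numpy as np

def poisson_glm_loglik(W_k, b_k, X, Y):
    """Poisson GLM log-likelihood for one hidden state.

    X : (T, N) lagged population activity (GLM covariates)
    Y : (T, N) spike counts to predict
    W_k : (N, N) state-k interaction weights; b_k : (N,) baselines
    """
    log_rate = X @ W_k + b_k          # canonical log link
    return np.sum(Y * log_rate - np.exp(log_rate))

def gaussian_prior_logpdf(W_k, M, sigma):
    """Learnable Gaussian prior tying state weights to a shared mean M.

    M plays the role of the state-constant interaction; each state's
    W_k deviates from it with scale sigma.
    """
    return -0.5 * np.sum((W_k - M) ** 2) / sigma ** 2

def m_step_objective(W_k, b_k, M, sigma, X, Y, gamma_k):
    """Responsibility-weighted likelihood plus prior for state k.

    gamma_k : (T,) HMM posteriors p(z_t = k | data) from the E-step.
    """
    log_rate = X @ W_k + b_k
    weighted_ll = np.sum(gamma_k[:, None] * (Y * log_rate - np.exp(log_rate)))
    return weighted_ll + gaussian_prior_logpdf(W_k, M, sigma)
```

Maximizing this over each state's W_k, and over the shared M across states, is what lets M absorb the state-constant structure while the per-state GLMs capture functional variation.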
Related papers
- SynapsNet: Enhancing Neuronal Population Dynamics Modeling via Learning Functional Connectivity
We introduce SynapsNet, a novel deep-learning framework that effectively models population dynamics and functional interactions between neurons.
A shared decoder uses the input current, previous neuronal activity, neuron embedding, and behavioral data to predict the population activity in the next time step.
Our experiments, conducted on mouse cortical activity from publicly available datasets, demonstrate that SynapsNet consistently outperforms existing models in forecasting population activity.
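As a rough illustration of the decoder interface this summary describes (not the SynapsNet architecture itself; every name and shape below is a guess for exposition), a shared network maps the four input streams to next-step activity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D_EMB, D_BEH, H = 50, 8, 4, 64   # neurons, embedding/behavior dims, hidden

# Hypothetical shared-decoder weights (one set reused for every neuron)
W1 = rng.normal(0.0, 0.1, (2 + D_EMB + D_BEH, H))
W2 = rng.normal(0.0, 0.1, (H, 1))

def predict_next_step(current, prev_activity, embeddings, behavior):
    """Map the four input streams to each neuron's next-step activity.

    current, prev_activity : (N,) per-neuron input current and activity
    embeddings             : (N, D_EMB) learned per-neuron embeddings
    behavior               : (D_BEH,) behavioral covariates, shared by all
    """
    beh = np.broadcast_to(behavior, (N, D_BEH))
    feats = np.column_stack([current, prev_activity, embeddings, beh])
    hidden = np.tanh(feats @ W1)            # shared decoder body
    return (hidden @ W2).ravel()            # (N,) predicted activity

y_hat = predict_next_step(rng.normal(size=N), rng.normal(size=N),
                          rng.normal(size=(N, D_EMB)), rng.normal(size=D_BEH))
```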
arXiv Detail & Related papers (2024-11-12T22:25:15Z)
- Learning dynamic representations of the functional connectome in neurobiological networks
We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals.
We show that our method robustly predicts the causal interactions between neurons that generate behavior.
arXiv Detail & Related papers (2024-02-21T19:54:25Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
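A classic, generic instance of this principle (not the paper's lateral-connection method) is Sanger's generalized Hebbian algorithm, in which a Hebbian term plus a decorrelating anti-Hebbian term drives the weights to the top-k principal subspace:

```python
import numpy as np

def generalized_hebbian(X, k, lr=1e-3, epochs=50, seed=0):
    """Sanger's rule: Hebbian outer-product updates with a lower-triangular
    anti-Hebbian correction; rows of W converge to the top-k principal
    components of X (T, d). A textbook stand-in for the idea, not the
    spiking/orthogonal-projection method of the paper above.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (k, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                                    # unit outputs
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```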
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference?
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Learnable latent embeddings for joint behavioral and neural analysis
We show that CEBRA can be used to map space, uncover complex kinematic features, and rapidly decode natural movies from visual cortex with high accuracy.
We validate its accuracy and demonstrate its utility for both calcium and electrophysiology datasets, across sensory and motor tasks, and in simple or complex behaviors across species.
arXiv Detail & Related papers (2022-04-01T19:19:33Z)
- Overcoming the Domain Gap in Neural Action Representations
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset of spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
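The flavor of such homeostasis can be sketched with a leaky integrator whose threshold jumps after each spike and relaxes back toward rest; this is a generic illustration, not the MPATH equations:

```python
import numpy as np

def adaptive_threshold_neuron(inputs, tau_v=20.0, tau_th=200.0,
                              th_rest=1.0, th_jump=0.5, dt=1.0):
    """Leaky integrate-and-fire unit with a homeostatic threshold: the
    threshold rises after every spike and decays toward its resting value,
    so the neuron self-regulates its firing rate. Generic sketch only.
    """
    v, th, spikes = 0.0, th_rest, []
    for current in inputs:
        v += dt * (-v / tau_v + current)      # membrane integration
        th += dt * (th_rest - th) / tau_th    # threshold relaxes to rest
        if v >= th:
            v = 0.0                           # reset after a spike
            th += th_jump                     # homeostatic push-back
            spikes.append(True)
        else:
            spikes.append(False)
    return np.array(spikes)

spike_train = adaptive_threshold_neuron(np.full(500, 0.08))
```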
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Continual Learning with Deep Artificial Neurons
We introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks.
We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network.
We show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting.
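The core idea, that every neuron is itself a small deep network and all neurons share one meta-learned parameter vector (the phenotype), can be sketched as follows; shapes and names are hypothetical, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, H = 4, 8                  # per-neuron input width, hidden width

# One shared "phenotype": the weights of the small network that lives
# inside every neuron (hypothetical shapes, for illustration only).
phenotype = {"W1": rng.normal(0.0, 0.5, (D_IN, H)),
             "W2": rng.normal(0.0, 0.5, (H, 1))}

def dan_layer(x, synapses, phenotype):
    """A layer of Deep Artificial Neurons: each neuron routes the layer
    input through its own synapses, then through the shared phenotype net.

    x : (D,) layer input; synapses : (N, D, D_IN) per-neuron synaptic maps
    """
    out = []
    for s in synapses:                        # one small net per neuron
        pre = x @ s                           # (D_IN,) neuron-specific input
        h = np.tanh(pre @ phenotype["W1"])    # shared nonlinearity, stage 1
        out.append((h @ phenotype["W2"]).item())
    return np.array(out)

y = dan_layer(rng.normal(size=6), rng.normal(0.0, 0.3, (5, 6, D_IN)), phenotype)
```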
arXiv Detail & Related papers (2020-11-13T17:50:10Z)
- Efficient Inference of Flexible Interaction in Spiking-neuron Networks
We use the nonlinear Hawkes process to model excitatory or inhibitory interactions among neurons.
We show our algorithm can estimate the temporal dynamics of interaction and reveal the interpretable functional connectivity underlying neural spike trains.
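Concretely, the conditional intensity of such a nonlinear multivariate Hawkes process with exponential kernels might be computed as below; the signs of W encode excitation versus inhibition, and a softplus link keeps rates positive (a schematic sketch, not the paper's inference algorithm):

```python
import numpy as np

def hawkes_intensity(t, spike_times, mu, W, beta=1.0):
    """Conditional intensity of a nonlinear multivariate Hawkes process.

    spike_times : list of arrays; spike_times[j] = past spikes of neuron j
    mu : (N,) baseline drives; W[i, j] > 0 means j excites i, < 0 inhibits
    beta : decay rate of the exponential interaction kernel
    """
    drive = np.array(mu, dtype=float)
    for j, times in enumerate(spike_times):
        past = times[times < t]
        drive += W[:, j] * np.sum(np.exp(-beta * (t - past)))
    return np.log1p(np.exp(drive))            # softplus link keeps rates >= 0

rate = hawkes_intensity(5.0,
                        [np.array([1.0, 4.2]), np.array([2.5])],
                        mu=np.array([0.2, 0.3]),
                        W=np.array([[0.0, 0.8],
                                    [-0.5, 0.0]]))
```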
arXiv Detail & Related papers (2020-06-23T09:10:30Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
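The XOR claim rests on the activation being non-monotonic: a single unit with a monotonic activation is a linear separator and cannot represent XOR, while a bump-shaped activation can peak exactly on the two positive inputs. The sketch below uses a generic Gaussian bump as a stand-in; the paper's actual ADA has a different functional form:

```python
import numpy as np

def bump(z):
    """Generic non-monotonic activation (Gaussian bump), standing in for
    the paper's apical dendrite activation (ADA), whose exact form differs."""
    return np.exp(-z ** 2)

# Single unit y = bump(w . x + b). With w = (1, 1) and b = -1 the
# pre-activation is -1 at (0,0), 0 at (0,1) and (1,0), and 1 at (1,1),
# so the bump peaks exactly on the two XOR-positive inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, b = np.array([1.0, 1.0]), -1.0
pred = (bump(X @ w + b) > 0.5).astype(int)
assert (pred == np.array([0, 1, 1, 0])).all()   # 100% accuracy on XOR
```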
arXiv Detail & Related papers (2020-02-02T21:09:39Z)