Unifying and generalizing models of neural dynamics during
decision-making
- URL: http://arxiv.org/abs/2001.04571v1
- Date: Mon, 13 Jan 2020 23:57:28 GMT
- Title: Unifying and generalizing models of neural dynamics during
decision-making
- Authors: David M. Zoltowski, Jonathan W. Pillow, and Scott W. Linderman
- Abstract summary: We propose a unifying framework for modeling neural activity during decision-making tasks.
The framework includes the canonical drift-diffusion model and enables extensions such as multi-dimensional accumulators, variable and collapsing boundaries, and discrete jumps.
- Score: 27.46508483610472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An open question in systems and computational neuroscience is how neural
circuits accumulate evidence towards a decision. Fitting models of
decision-making theory to neural activity helps answer this question, but
current approaches limit the number of these models that we can fit to neural
data. Here we propose a unifying framework for modeling neural activity during
decision-making tasks. The framework includes the canonical drift-diffusion
model and enables extensions such as multi-dimensional accumulators, variable
and collapsing boundaries, and discrete jumps. Our framework is based on
constraining the parameters of recurrent state-space models, for which we
introduce a scalable variational Laplace-EM inference algorithm. We applied the
modeling approach to spiking responses recorded from monkey parietal cortex
during two decision-making tasks. We found that a two-dimensional accumulator
better captured the trial-averaged responses of a set of parietal neurons than
a single accumulator model. Next, we identified a variable lower boundary in
the responses of an LIP neuron during a random dot motion task.
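The drift-diffusion model that anchors the framework can be illustrated with a short simulation. The sketch below is a minimal, hypothetical example of a 1-D accumulator with an exponentially collapsing boundary, one of the extensions the abstract mentions; it is not the authors' state-space implementation, and all parameter names are illustrative.

```python
import numpy as np

def simulate_ddm(drift=0.5, noise=1.0, bound=1.0, collapse_rate=0.0,
                 dt=0.01, t_max=5.0, rng=None):
    """Simulate one trial of a 1-D drift-diffusion model.

    The decision variable x integrates drift plus Gaussian noise until it
    crosses a (possibly collapsing) symmetric boundary. Returns (choice, rt),
    where choice is +1 (upper), -1 (lower), or 0 (no decision by t_max).
    Parameter names are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(rng)
    x, t = 0.0, 0.0
    while t < t_max:
        # Euler-Maruyama step of the diffusion process
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        # exponentially collapsing decision boundary
        b = bound * np.exp(-collapse_rate * t)
        if x >= b:
            return +1, t
        if x <= -b:
            return -1, t
    return 0, t_max

# With positive drift, most trials should terminate at the upper boundary.
choices = [simulate_ddm(drift=1.0, rng=seed)[0] for seed in range(200)]
upper_fraction = np.mean([c == +1 for c in choices])
```

Setting `collapse_rate=0` recovers the canonical fixed-boundary drift-diffusion model; the paper's framework expresses such variants as constrained recurrent state-space models rather than explicit simulations like this one.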
Related papers
- Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models [2.600709013150986]
Understanding the neural basis of behavior is a fundamental goal in neuroscience.
Our approach, named BeNeDiff, first identifies a fine-grained and disentangled neural subspace.
It then employs state-of-the-art generative diffusion models to synthesize behavior videos that interpret the neural dynamics of each latent factor.
arXiv Detail & Related papers (2024-10-12T18:28:56Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding using only a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles [3.625425081454343]
Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity.
These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity.
We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization.
arXiv Detail & Related papers (2022-10-06T18:37:49Z)
- Supervised Parameter Estimation of Neuron Populations from Multiple Firing Events [3.2826301276626273]
We study an automatic approach to learning the parameters of neuron populations via supervised learning, using a training set of pairs of spiking series and parameter labels.
We simulate many neuronal populations at different parameter settings using a neuron model.
We then compare their performance against classical approaches including a genetic search, Bayesian sequential estimation, and a random walk approximate model.
arXiv Detail & Related papers (2022-10-02T03:17:05Z)
- Ranking of Communities in Multiplex Spatiotemporal Models of Brain Dynamics [0.0]
We propose an interpretation of neural HMMs as multiplex brain state graph models, which we term Hidden Markov Graph Models.
This interpretation allows for dynamic brain activity to be analysed using the full repertoire of network analysis techniques.
We produce a new tool for determining important communities of brain regions using a random walk-based procedure.
arXiv Detail & Related papers (2022-03-17T12:14:09Z)
- A probabilistic latent variable model for detecting structure in binary data [0.6767885381740952]
We introduce a novel, probabilistic binary latent variable model to detect noisy or approximate repeats of patterns in sparse binary data.
The model's capability is demonstrated by extracting structure in recordings from retinal neurons.
We apply our model to spiking responses recorded in retinal ganglion cells during stimulation with a movie.
arXiv Detail & Related papers (2022-01-26T18:37:35Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.