STNDT: Modeling Neural Population Activity with a Spatiotemporal
Transformer
- URL: http://arxiv.org/abs/2206.04727v1
- Date: Thu, 9 Jun 2022 18:54:23 GMT
- Title: STNDT: Modeling Neural Population Activity with a Spatiotemporal
Transformer
- Authors: Trung Le and Eli Shlizerman
- Abstract summary: We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activity across four neural datasets.
- Score: 19.329190789275565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling neural population dynamics underlying noisy single-trial spiking
activities is essential for relating neural observation and behavior. A recent
non-recurrent method - Neural Data Transformers (NDT) - has shown great success
in capturing neural dynamics with low inference latency without an explicit
dynamical model. However, NDT focuses on modeling the temporal evolution of the
population activity while neglecting the rich covariation between individual
neurons. In this paper we introduce SpatioTemporal Neural Data Transformer
(STNDT), an NDT-based architecture that explicitly models responses of
individual neurons in the population across time and space to uncover their
underlying firing rates. In addition, we propose a contrastive learning loss
that works in accordance with the masked modeling objective to further improve
predictive performance. We show that our model achieves state-of-the-art
performance at the ensemble level in estimating neural activity across four
neural datasets, demonstrating its capability to capture autonomous and
non-autonomous dynamics spanning different cortical regions while being
completely agnostic to the specific behaviors at hand. Furthermore, STNDT's
spatial attention mechanism reveals consistently important subsets of neurons
that play a vital role in driving the response of the entire population,
providing interpretability and key insights into how the population of neurons
performs computation.
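The abstract's core idea, attending over both the temporal axis (time bins) and the spatial axis (neurons) of a spike-count matrix, can be sketched minimally as below. This is an illustrative assumption-laden sketch, not the paper's architecture: it uses identity Q/K/V projections, a single head, and a simple additive fusion of the two attention outputs, whereas STNDT itself learns projections and combines the streams inside a full transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (L, d). Single-head scaled dot-product attention with
    # identity Q/K/V projections (a simplifying assumption).
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def spatiotemporal_step(spikes):
    # spikes: (T, N) matrix of binned spike counts.
    # Temporal stream: each time bin attends over all time bins.
    temporal = self_attention(spikes)
    # Spatial stream: each neuron attends over all neurons
    # (attention applied along the other axis via transposition).
    spatial = self_attention(spikes.T).T
    # Hypothetical fusion: sum the two streams.
    return temporal + spatial
```

The spatial attention weights (the softmax matrix inside the spatial stream) are what the abstract refers to when it says the model reveals "consistently important subsets of neurons": neurons receiving high attention mass across trials are candidates for driving the population response.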
Related papers
- Inferring stochastic low-rank recurrent neural networks from neural data [5.179844449042386]
A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system.
Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics.
Here, we propose to fit low-rank RNNs with variational sequential Monte Carlo methods.
arXiv Detail & Related papers (2024-06-24T15:57:49Z) - Rethinking Spiking Neural Networks as State Space Models [1.9775291915550175]
Spiking neural networks (SNNs) are posited as a biologically plausible alternative to conventional neural architectures.
We present a novel class of spiking neuron models grounded in state space models.
Our models attain state-of-the-art performance among SNN models across diverse long-range dependency tasks.
arXiv Detail & Related papers (2024-06-05T04:23:11Z) - Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted simulated neuronal circuit activity and inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Mesoscopic modeling of hidden spiking neurons [3.6868085124383616]
We use coarse-graining and mean-field approximations to derive a bottom-up, neuronally-grounded latent variable model (neuLVM).
neuLVM can be explicitly mapped to a recurrent, multi-population spiking neural network (SNN).
We show, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform efficient model inversion of large SNNs.
arXiv Detail & Related papers (2022-05-26T17:04:39Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Deep inference of latent dynamics with spatio-temporal super-resolution
using selective backpropagation through time [15.648009434801885]
Modern neural interfaces allow access to the activity of up to a million neurons within brain circuits.
However, bandwidth limits often create a trade-off between greater spatial sampling (more channels or pixels) and the frequency of temporal sampling.
Here we demonstrate that it is possible to obtain super-resolution in neuronal time series by exploiting relationships among neurons.
arXiv Detail & Related papers (2021-10-29T20:18:29Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.