A Unified, Scalable Framework for Neural Population Decoding
- URL: http://arxiv.org/abs/2310.16046v1
- Date: Tue, 24 Oct 2023 17:58:26 GMT
- Title: A Unified, Scalable Framework for Neural Population Decoding
- Authors: Mehdi Azabou, Vinam Arora, Venkataramana Ganesh, Ximeng Mao, Santosh
Nachimuthu, Michael J. Mendelson, Blake Richards, Matthew G. Perich,
Guillaume Lajoie, Eva L. Dyer
- Abstract summary: We introduce a training framework and architecture designed to model the population dynamics of neural activity.
We construct a large-scale multi-session model trained on large datasets from seven nonhuman primates.
- Score: 12.052847252465826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our ability to use deep learning approaches to decipher neural activity would
likely benefit from greater scale, in terms of both model size and datasets.
However, the integration of many neural recordings into one unified model is
challenging, as each recording contains the activity of different neurons from
different individual animals. In this paper, we introduce a training framework
and architecture designed to model the population dynamics of neural activity
across diverse, large-scale neural recordings. Our method first tokenizes
individual spikes within the dataset to build an efficient representation of
neural events that captures the fine temporal structure of neural activity. We
then employ cross-attention and a PerceiverIO backbone to further construct a
latent tokenization of neural population activities. Utilizing this
architecture and training framework, we construct a large-scale multi-session
model trained on large datasets from seven nonhuman primates, spanning over 158
different sessions of recording from over 27,373 neural units and over 100
hours of recordings. In a number of different tasks, we demonstrate that our
pretrained model can be rapidly adapted to new, unseen sessions with
unspecified neuron correspondence, enabling few-shot performance with minimal
labels. This work presents a powerful new approach for building deep learning
tools to analyze neural data and stakes out a clear path to training at scale.
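The architecture described above lends itself to a compact illustration: each spike becomes a token built from a unit embedding plus an encoding of its time, and a fixed set of learned latent tokens cross-attends to the spike tokens, PerceiverIO-style. The sketch below is a minimal, hypothetical PyTorch rendering of those two ideas, not the authors' implementation; the module names, latent count, and sinusoidal time code are assumptions.

```python
# Minimal sketch (not the authors' code) of the two ideas in the abstract:
# (1) tokenize individual spikes as (unit, time) events, and
# (2) compress a variable-length set of spike tokens into a fixed set of
#     latent tokens with cross-attention, PerceiverIO-style.
# Module names, sizes, and the sinusoidal time encoding are assumptions.
import torch
import torch.nn as nn


class SpikeTokenizer(nn.Module):
    """Embed each spike from its unit identity plus a sinusoidal encoding of its time."""

    def __init__(self, num_units: int, dim: int):
        super().__init__()
        self.unit_embedding = nn.Embedding(num_units, dim)
        # Fixed frequencies for a simple sinusoidal time code (illustrative choice).
        self.register_buffer("freqs", torch.logspace(0, 3, dim // 2))

    def forward(self, unit_ids: torch.Tensor, spike_times: torch.Tensor) -> torch.Tensor:
        # unit_ids: (batch, n_spikes) int64; spike_times: (batch, n_spikes) float seconds
        phase = spike_times.unsqueeze(-1) * self.freqs            # (B, S, dim/2)
        time_code = torch.cat([phase.sin(), phase.cos()], dim=-1)  # (B, S, dim)
        return self.unit_embedding(unit_ids) + time_code          # (B, S, dim)


class LatentCrossAttention(nn.Module):
    """Compress spike tokens into a fixed number of learned latent tokens."""

    def __init__(self, dim: int, num_latents: int = 64, num_heads: int = 4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)

    def forward(self, spike_tokens: torch.Tensor) -> torch.Tensor:
        # spike_tokens: (B, S, dim) -> latents: (B, num_latents, dim)
        queries = self.latents.unsqueeze(0).expand(spike_tokens.size(0), -1, -1)
        latents, _ = self.cross_attn(queries, spike_tokens, spike_tokens)
        return self.self_attn(latents)  # refine the latent tokens with self-attention


# Toy usage: 2 recordings, 500 spikes each, 128 hypothetical units.
tokenizer = SpikeTokenizer(num_units=128, dim=64)
encoder = LatentCrossAttention(dim=64)
unit_ids = torch.randint(0, 128, (2, 500))
spike_times = torch.rand(2, 500).sort(dim=-1).values
latents = encoder(tokenizer(unit_ids, spike_times))
print(latents.shape)  # torch.Size([2, 64, 64])
```

Because the inputs are a set of (unit, time) tokens rather than a fixed channel layout, adapting to a new session would primarily mean learning embeddings for the new units, which is consistent with the few-shot adaptation the abstract describes.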
Related papers
- Meta-Dynamical State Space Models for Integrative Neural Data Analysis [8.625491800829224]
Learning shared structure across environments facilitates rapid learning and adaptive behavior in neural systems.
There has been limited work exploiting the shared structure in neural activity during similar tasks for learning latent dynamics from neural recordings.
We propose a novel approach for meta-learning this solution space from task-related neural activity of trained animals.
arXiv Detail & Related papers (2024-10-07T19:35:49Z)
- Towards a "universal translator" for neural dynamics at single-cell, single-spike resolution [10.49121904052395]
We build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas.
Prediction tasks include single-neuron and region-level activity prediction, forward prediction, and behavior decoding.
arXiv Detail & Related papers (2024-07-19T21:05:28Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles [1.2617078020344619]
We show that self-supervised training of a deep neural network can be used to compensate for inter-session variability.
A sequential autoencoding model can maintain state-of-the-art behaviour decoding performance for completely unseen recording sessions several days into the future.
arXiv Detail & Related papers (2022-05-19T20:00:33Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory that avoids interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
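The last entry above describes splitting an RNN's hidden state into modules and adding modules incrementally so the network can learn progressively longer dependencies. A rough sketch of that general idea follows, under the assumption that each module is a small recurrent cell updated at its own temporal stride; this is illustrative only and not the cited paper's actual architecture.

```python
# Rough sketch (assumptions, not the cited paper's code): an RNN whose hidden
# state is split into modules, each updated at a different temporal stride,
# with a method to append a new, slower module between training stages.
import torch
import torch.nn as nn


class MultiScaleRNN(nn.Module):
    def __init__(self, input_dim: int, module_dim: int):
        super().__init__()
        self.input_dim = input_dim
        self.module_dim = module_dim
        self.modules_list = nn.ModuleList([nn.GRUCell(input_dim, module_dim)])
        self.strides = [1]  # module i is updated every strides[i] steps

    def add_module_scale(self, stride: int) -> None:
        """Append a new module that updates every `stride` steps (called between training stages)."""
        self.modules_list.append(nn.GRUCell(self.input_dim, self.module_dim))
        self.strides.append(stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_dim); returns concatenated module states per step.
        batch, time, _ = x.shape
        states = [x.new_zeros(batch, self.module_dim) for _ in self.modules_list]
        outputs = []
        for t in range(time):
            for i, (cell, stride) in enumerate(zip(self.modules_list, self.strides)):
                if t % stride == 0:  # slower modules skip most time steps
                    states[i] = cell(x[:, t], states[i])
            outputs.append(torch.cat(states, dim=-1))
        return torch.stack(outputs, dim=1)  # (batch, time, module_dim * n_modules)


# Toy usage: start with one fast module, then add a slower one.
rnn = MultiScaleRNN(input_dim=8, module_dim=16)
rnn.add_module_scale(stride=4)
y = rnn(torch.randn(2, 32, 8))
print(y.shape)  # torch.Size([2, 32, 32])
```

In an incremental training loop, one would train the single-module network first, then call add_module_scale with a larger stride and continue training so the new, slower module can pick up longer-range structure.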