Inferring Population Dynamics in Macaque Cortex
- URL: http://arxiv.org/abs/2304.06040v2
- Date: Thu, 19 Oct 2023 19:56:31 GMT
- Title: Inferring Population Dynamics in Macaque Cortex
- Authors: Ganga Meghanath, Bryan Jimenez, Joseph G. Makin
- Abstract summary: We show that simple, general-purpose architectures based on recurrent neural networks (RNNs) outperform more "bespoke" models.
We argue that the autoregressive bias imposed by RNNs is critical for achieving the highest levels of performance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The proliferation of multi-unit cortical recordings over the last two
decades, especially in macaques and during motor-control tasks, has generated
interest in neural "population dynamics": the time evolution of neural activity
across a group of neurons working together. A good model of these dynamics
should be able to infer the activity of unobserved neurons within the same
population and of the observed neurons at future times. Accordingly,
Pandarinath and colleagues have introduced a benchmark to evaluate models on
these two (and related) criteria: four data sets, each consisting of firing
rates from a population of neurons, recorded from macaque cortex during
movement-related tasks. Here we show that simple, general-purpose architectures
based on recurrent neural networks (RNNs) outperform more "bespoke" models, and
indeed outperform all published models on all four data sets in the benchmark.
Performance can be improved further still with a novel, hybrid architecture
that augments the RNN with self-attention, as in transformer networks. But pure
transformer models fail to achieve this level of performance, either in our
work or that of other groups. We argue that the autoregressive bias imposed by
RNNs is critical for achieving the highest levels of performance. We conclude,
however, by proposing that the benchmark be augmented with an alternative
evaluation of latent dynamics that favors generative over discriminative models
like the ones we propose in this report.
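The abstract describes, but does not show, the hybrid architecture. Below is a minimal sketch of one way to augment an RNN with causal self-attention for firing-rate prediction, assuming PyTorch; the class name, layer sizes, and readout are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HybridRNNAttention(nn.Module):
    """GRU backbone refined by causal self-attention (hypothetical sketch)."""
    def __init__(self, n_neurons: int, hidden: int = 128, heads: int = 4):
        super().__init__()
        self.rnn = nn.GRU(n_neurons, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.readout = nn.Linear(hidden, n_neurons)

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, time, n_neurons) binned spike counts
        h, _ = self.rnn(spikes)  # the autoregressive temporal bias argued for above
        T = h.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        a, _ = self.attn(h, h, h, attn_mask=causal)  # no peeking at future steps
        return self.readout(h + a)  # predicted log firing rates

model = HybridRNNAttention(n_neurons=130)
log_rates = model(torch.randn(8, 100, 130))  # -> (8, 100, 130)
```

Since the benchmark scores models by how well they predict held-out neurons' activity, a log-rate readout like this one would typically be trained with a Poisson likelihood on the observed spike counts.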
Related papers
- SynapsNet: Enhancing Neuronal Population Dynamics Modeling via Learning Functional Connectivity [0.0]
We introduce SynapsNet, a novel deep-learning framework that effectively models population dynamics and functional interactions between neurons.
A shared decoder uses the input current, previous neuronal activity, neuron embedding, and behavioral data to predict the population activity in the next time step.
Our experiments, conducted on mouse cortical activity from publicly available datasets, demonstrate that SynapsNet consistently outperforms existing models in forecasting population activity.
arXiv Detail & Related papers (2024-11-12T22:25:15Z)
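A minimal sketch of the next-step decoder the SynapsNet summary describes, combining input current, previous activity, a per-neuron embedding, and behavioral data. It assumes PyTorch; all names and sizes are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn

class NextStepDecoder(nn.Module):
    """Shared decoder applied to every neuron; sizes are illustrative."""
    def __init__(self, n_neurons: int, emb_dim: int = 16, behav_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_neurons, emb_dim)  # identity of each neuron
        # per-neuron inputs: input current (1) + previous activity (1) + embedding + behavior
        self.mlp = nn.Sequential(
            nn.Linear(2 + emb_dim + behav_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, current, prev_activity, behavior):
        # current, prev_activity: (batch, n_neurons); behavior: (batch, behav_dim)
        b, n = current.shape
        emb = self.embed(torch.arange(n)).expand(b, -1, -1)  # (b, n, emb_dim)
        beh = behavior.unsqueeze(1).expand(-1, n, -1)        # (b, n, behav_dim)
        x = torch.cat([current.unsqueeze(-1), prev_activity.unsqueeze(-1), emb, beh], dim=-1)
        return self.mlp(x).squeeze(-1)  # predicted activity at the next time step
```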
- Inferring stochastic low-rank recurrent neural networks from neural data [5.179844449042386]
A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system.
Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics.
Here, we propose to fit low-rank RNNs with variational sequential Monte Carlo methods.
arXiv Detail & Related papers (2024-06-24T15:57:49Z)
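The entry above turns on the low-rank constraint. A toy sketch of a rank-r recurrent parameterization (W = UVᵀ) with state noise follows; the variational sequential Monte Carlo fitting procedure is not shown, and everything here is illustrative rather than the authors' model.

```python
import torch
import torch.nn as nn

class StochasticLowRankRNN(nn.Module):
    """Rank-r recurrent weights W = U Vᵀ plus state noise; illustrative only."""
    def __init__(self, n_units: int = 200, rank: int = 2, noise_std: float = 0.1):
        super().__init__()
        self.U = nn.Parameter(torch.randn(n_units, rank) / n_units ** 0.5)
        self.V = nn.Parameter(torch.randn(n_units, rank) / n_units ** 0.5)
        self.noise_std = noise_std

    def step(self, x: torch.Tensor) -> torch.Tensor:
        # the low-rank factorization confines the deterministic dynamics to an
        # r-dimensional subspace, which is what makes such models tractable
        drive = torch.tanh(x) @ self.V @ self.U.T
        return drive + self.noise_std * torch.randn_like(x)
```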
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding [82.46024259137823]
We propose a cross-model comparative loss for a broad range of tasks.
We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks.
arXiv Detail & Related papers (2023-01-10T03:04:27Z)
- STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activity across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z)
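One common way to model individual neurons as well as time, as the STNDT entry above describes, is to alternate self-attention over time steps with self-attention over neurons. The sketch below is a hypothetical illustration of that idea, not the authors' code; shapes and names are assumptions.

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """Alternating temporal and spatial (across-neuron) self-attention."""
    def __init__(self, d_model: int = 64, heads: int = 4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(d_model, heads, batch_first=True)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, time, neurons, d_model)
        b, t, n, d = z.shape
        zt = z.permute(0, 2, 1, 3).reshape(b * n, t, d)  # each neuron attends over time
        zt = zt + self.temporal(zt, zt, zt)[0]
        zs = zt.reshape(b, n, t, d).permute(0, 2, 1, 3).reshape(b * t, n, d)
        zs = zs + self.spatial(zs, zs, zs)[0]            # each time step attends over neurons
        return zs.reshape(b, t, n, d)
```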
- Representation learning for neural population activity with Neural Data Transformers [3.4376560669160394]
We introduce the Neural Data Transformer (NDT), a non-recurrent alternative to explicit dynamics models.
NDT enables 3.9ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines.
arXiv Detail & Related papers (2021-08-02T23:36:39Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
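A toy illustration of the evolutionary weight search in the entry above: mutate candidate weight vectors and keep the best under a fitness function. The fitness function is hypothetical here; in the paper it would reflect how well the evolved RNN reproduces the rats' behavior and neural activity.

```python
import numpy as np

def evolve(fitness, n_weights, pop=50, gens=100, sigma=0.1, elite=5):
    """Simple mutation-selection loop; all hyperparameters are illustrative."""
    rng = np.random.default_rng(0)
    population = rng.standard_normal((pop, n_weights))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(scores)[-elite:]]          # keep the elite
        children = parents[rng.integers(elite, size=pop - elite)]  # resample parents
        mutants = children + sigma * rng.standard_normal((pop - elite, n_weights))
        population = np.vstack([parents, mutants])
    return population[np.argmax([fitness(w) for w in population])]
```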
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
Just as biological neurons are thought to do, artificial neurons in our generative model predict what neighboring neurons will do and adjust their parameters based on how well those predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
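The predictive-processing idea in the entry above can be caricatured in a few lines: one group of units predicts another, and weights move to shrink the prediction error. This NumPy toy, with invented sizes and stand-in data, is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 10)) * 0.1  # 10 "higher" units predict 20 "lower" units
lr = 0.01

for _ in range(100):
    top = rng.standard_normal(10)     # higher-layer activity (stand-in data)
    bottom = rng.standard_normal(20)  # lower-layer activity (stand-in data)
    prediction = W @ top              # what the higher layer expects below
    error = bottom - prediction       # prediction error drives learning
    W += lr * np.outer(error, top)    # local, Hebbian-like weight correction
```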
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
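The training strategy in the entry above suggests a two-term objective: fit the output everywhere, and fit the internal dynamics only where neurons were recorded. A hedged sketch with hypothetical names:

```python
import torch

def combined_loss(y_pred, y_true, h_pred, h_recorded, recorded_idx, lam=0.5):
    """Illustrative two-term objective; names and the weighting are assumptions."""
    # output term: reproduce the network's input-output behavior
    output_term = torch.mean((y_pred - y_true) ** 2)
    # dynamics term: reproduce internal activity on the recorded subset only
    dynamics_term = torch.mean((h_pred[..., recorded_idx] - h_recorded) ** 2)
    return output_term + lam * dynamics_term
```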
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
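The NAM construction in the entry above is concrete enough to sketch: one small subnetwork per input feature, with contributions summed. The sketch below assumes PyTorch and omits details of the paper such as its specialized (ExU) hidden units.

```python
import torch
import torch.nn as nn

class NAM(nn.Module):
    """One subnetwork per feature, summed linearly; sizes are illustrative."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); each subnetwork sees exactly one feature, so
        # each feature's learned contribution can be plotted and inspected
        parts = [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.stack(parts, dim=0).sum(dim=0).squeeze(-1) + self.bias
```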
- Investigation and Analysis of Hyper and Hypo neuron pruning to selectively update neurons during Unsupervised Adaptation [8.845660219190298]
Pruning approaches look for low-salient neurons that contribute less to a model's decision.
This work investigates whether pruning approaches can detect neurons that are either high-salient (mostly active, or hyper) or low-salient (barely active, or hypo).
It shows that it may be possible to selectively adapt certain neurons (the hyper and hypo neurons) first, followed by full-network fine-tuning.
arXiv Detail & Related papers (2020-01-06T19:46:57Z)
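A toy sketch of how hyper (mostly active) and hypo (barely active) neurons might be identified from activation statistics, per the entry above; the percentile thresholds are invented for illustration.

```python
import numpy as np

def select_hyper_hypo(activations, lo_pct=10, hi_pct=90):
    """activations: (samples, neurons) recorded on adaptation data."""
    mean_act = np.abs(activations).mean(axis=0)
    hypo = np.where(mean_act <= np.percentile(mean_act, lo_pct))[0]   # barely active
    hyper = np.where(mean_act >= np.percentile(mean_act, hi_pct))[0]  # mostly active
    return hyper, hypo  # candidates to adapt first, before full fine-tuning
```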
This list is automatically generated from the titles and abstracts of the papers in this site.