Understanding Neural Coding on Latent Manifolds by Sharing Features and
Dividing Ensembles
- URL: http://arxiv.org/abs/2210.03155v1
- Date: Thu, 6 Oct 2022 18:37:49 GMT
- Title: Understanding Neural Coding on Latent Manifolds by Sharing Features and
Dividing Ensembles
- Authors: Martin Bjerke, Lukas Schott, Kristopher T. Jensen, Claudia Battistin,
David A. Klindt, Benjamin A. Dunn
- Abstract summary: Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity.
These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity.
We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization.
- Score: 3.625425081454343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Systems neuroscience relies on two complementary views of neural data,
characterized by single neuron tuning curves and analysis of population
activity. These two perspectives combine elegantly in neural latent variable
models that constrain the relationship between latent variables and neural
activity, modeled by simple tuning curve functions. This has recently been
demonstrated using Gaussian processes, with applications to realistic and
topologically relevant latent manifolds. Those and previous models, however,
missed crucial shared coding properties of neural populations. We propose
feature sharing across neural tuning curves, which significantly improves
performance and leads to better-behaved optimization. We also propose a
solution to the problem of ensemble detection, whereby different groups of
neurons, i.e., ensembles, can be modulated by different latent manifolds. This
is achieved through a soft clustering of neurons during training, thus allowing
for the separation of mixed neural populations in an unsupervised manner. These
innovations lead to more interpretable models of neural population activity
that train well and perform better even on mixtures of complex latent
manifolds. Finally, we apply our method to a recently published grid cell
dataset, recovering distinct ensembles, inferring toroidal latents, and
predicting neural tuning curves, all in a single integrated modeling framework.
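The two mechanisms in the abstract lend themselves to a compact illustration. Below is a minimal NumPy sketch, not the authors' implementation: tuning curves are weighted sums of a basis of features shared across all neurons (feature sharing), and each neuron carries a soft responsibility over candidate latent manifolds (ensemble division). The sinusoidal basis, sizes, and variable names are illustrative assumptions; the paper uses Gaussian process tuning curves and richer manifolds.

```python
import numpy as np

# Illustrative sketch only, not the paper's implementation.
rng = np.random.default_rng(0)
N, T, B, K = 50, 200, 8, 2   # neurons, time bins, shared features, candidate ensembles

# Hypothetical latent trajectories, one per candidate manifold (here: two 1-D rings).
z = rng.uniform(0, 2 * np.pi, size=(K, T))

def shared_features(theta, B):
    """A shared basis on the ring: B sinusoidal features evaluated along the latent."""
    h = np.arange(1, B // 2 + 1)[:, None]                          # harmonics
    return np.concatenate([np.cos(h * theta), np.sin(h * theta)])  # (B, T)

F = np.stack([shared_features(z[k], B) for k in range(K)])         # (K, B, T)

# Feature sharing: every neuron's tuning curve reuses the SAME feature bank.
W = 0.1 * rng.standard_normal((N, B))
Y = rng.poisson(1.0, size=(N, T))                                  # stand-in spike counts

rates = np.exp(np.einsum("nb,kbt->nkt", W, F))                     # (N, K, T)
ll = (Y[:, None, :] * np.log(rates) - rates).sum(-1)               # Poisson log-lik (N, K)

# Ensemble division: soft, per-neuron responsibilities over the K manifolds.
r = np.exp(ll - ll.max(1, keepdims=True))
r /= r.sum(1, keepdims=True)

objective = (r * ll).sum()   # a real model would ascend this in W, z, and r jointly
```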
Related papers
- Inferring stochastic low-rank recurrent neural networks from neural data [5.179844449042386]
A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system.
Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics.
Here, we propose to fit low-rank RNNs with variational sequential Monte Carlo methods.
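As a quick sketch of the model class (not of the variational sequential Monte Carlo fitting itself), a low-rank RNN constrains the recurrent weights to a product of two thin matrices, so population activity evolves in a low-dimensional latent subspace. The rank, sizes, and noiseless dynamics below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a low-rank RNN, not the paper's fitting procedure.
rng = np.random.default_rng(1)
N, R, T, dt = 100, 2, 500, 0.1        # neurons, rank, timesteps, Euler step

# Rank-R connectivity: the N x N weight matrix has only R effective dimensions.
U = rng.standard_normal((N, R)) / np.sqrt(N)
V = rng.standard_normal((N, R))
W = U @ V.T

x = rng.standard_normal(N)
latent = np.zeros((T, R))
for t in range(T):
    x = x + dt * (-x + W @ np.tanh(x))   # leaky low-rank dynamics (noiseless here;
                                         # the paper's model includes process noise)
    latent[t] = V.T @ np.tanh(x)         # R-dim latent that drives the next step
```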
arXiv Detail & Related papers (2024-06-24T15:57:49Z)
- Latent Variable Sequence Identification for Cognitive Models with Neural Bayes Estimation [7.7227297059345466]
We present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space.
Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models.
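A minimal sketch of the idea under toy assumptions (the simulator, architecture, and loss below are all illustrative, not the paper's): a recurrent network is trained purely on simulated experiments to map an observed sequence directly to the latent variable that generated it.

```python
import torch, torch.nn as nn

# Hypothetical simulator: draws a latent parameter and emits an observation sequence.
def simulate(batch, T=50):
    theta = torch.rand(batch, 1)                      # latent of interest
    data = theta.unsqueeze(1) + 0.3 * torch.randn(batch, T, 1)  # toy observation model
    return data, theta

class NeuralBayesEstimator(nn.Module):
    """GRU mapping an experimental sequence directly to a latent estimate."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        _, h = self.rnn(x)
        return self.head(h[-1])

model = NeuralBayesEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                               # trained on simulations only
    data, theta = simulate(64)
    loss = ((model(data) - theta) ** 2).mean()        # squared error -> posterior mean
    opt.zero_grad(); loss.backward(); opt.step()
```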
arXiv Detail & Related papers (2024-06-20T21:13:39Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
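One way to picture the representation, as a rough sketch: flatten an MLP into a graph whose nodes are neurons (carrying biases) and whose edges carry weights, so any permutation-equivariant GNN can consume it. The construction below is an illustrative assumption, not the paper's exact featurization.

```python
import numpy as np

# Illustrative: encode a toy MLP (layer sizes 3 -> 4 -> 2) as a computational graph.
sizes = [3, 4, 2]
rng = np.random.default_rng(2)
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(n) for n in sizes[1:]]

# One graph node per neuron; edges carry weights, nodes carry biases.
offsets = np.cumsum([0] + sizes)
edges, edge_feat = [], []
for l, W in enumerate(weights):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            edges.append((offsets[l] + i, offsets[l + 1] + j))
            edge_feat.append(W[i, j])
node_feat = np.concatenate([np.zeros(sizes[0])] + biases)  # input nodes get bias 0

# A message-passing GNN over (edges, edge_feat, node_feat) is equivariant to
# neuron permutations -- the symmetry the paper exploits.
```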
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
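A generic sketch of the autoregressive setup (not Neuroformer's multimodal architecture): discretized spike events become tokens, and a causally masked transformer is trained to predict the next token. The vocabulary size and tokenization are placeholder assumptions.

```python
import torch, torch.nn as nn

# Illustrative autoregressive setup; Neuroformer's actual architecture differs.
V, T = 128, 64                        # vocab of discretized spike events, context length
tokens = torch.randint(0, V, (8, T))  # stand-in for tokenized multi-neuron spike trains

emb = nn.Embedding(V, 64)
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(64, V)

causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
h = encoder(emb(tokens), mask=causal)          # each position sees only its past
logits = head(h)                               # next-spike-event prediction
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, V), tokens[:, 1:].reshape(-1))
```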
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a biophysically detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
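The core mechanism can be caricatured in a few lines: a small set of memory units with distinct leak timescales, updated through a nonlinear integration of the inputs and the current memory. The sizes, fixed timescales, and single-layer integration below are illustrative simplifications of the ELM design.

```python
import numpy as np

# Illustrative leaky-memory neuron, a simplification of the ELM model.
rng = np.random.default_rng(3)
D_in, M = 16, 8                       # synaptic inputs, memory units

tau = np.exp(rng.uniform(0, 3, M))    # timescales from ~1 to ~20 steps (learned in ELM)
lam = np.exp(-1.0 / tau)              # per-step decay factors

W_in = 0.1 * rng.standard_normal((M, D_in + M))
W_out = rng.standard_normal(M)

def elm_step(m, x):
    drive = np.tanh(W_in @ np.concatenate([x, m]))  # nonlinear synaptic integration
    m = lam * m + (1 - lam) * drive                 # leaky memory update
    return m, W_out @ m                             # new state, scalar output

m = np.zeros(M)
for t in range(100):
    m, y = elm_step(m, rng.standard_normal(D_in))
```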
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
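One simple way to quantify such diversity (a hedged illustration; the paper's exact measure may differ) is the mean pairwise dissimilarity between hidden neurons' weight vectors, which can also be used as a regularizer during training.

```python
import numpy as np

# Illustrative diversity measure over one hidden layer's weights.
rng = np.random.default_rng(4)
W = rng.standard_normal((32, 10))            # 32 hidden neurons x 10 inputs

# Diversity as mean pairwise cosine dissimilarity between weight vectors.
Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
cos = Wn @ Wn.T
off_diag = cos[~np.eye(len(cos), dtype=bool)]
diversity = 1.0 - np.abs(off_diag).mean()    # 1 = fully diverse, 0 = redundant neurons
```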
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
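A minimal sketch of that predictive-processing scheme, with toy layer sizes and a tanh nonlinearity as assumptions: each layer predicts the activity of the layer below, hidden states settle by trading off local prediction errors, and weights follow purely local, Hebbian-like updates.

```python
import numpy as np

# Illustrative predictive-coding generative model, not the paper's exact algorithm.
rng = np.random.default_rng(5)
sizes = [8, 16, 32]                            # top latent -> hidden -> observed
W = [0.1 * rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(sizes[-1])             # an observation to explain
z = [0.1 * rng.standard_normal(s) for s in sizes[:-1]] + [x]

lr = 0.05
for _ in range(100):
    # Each layer predicts the activity of the layer below it.
    errs = [z[l + 1] - np.tanh(z[l]) @ W[l] for l in range(len(W))]
    # Purely local, Hebbian-like weight updates from prediction errors.
    for l in range(len(W)):
        W[l] += lr * np.outer(np.tanh(z[l]), errs[l])
    # Hidden states settle by balancing their own error with feedback from below.
    for l in range(1, len(z) - 1):
        z[l] += lr * (-errs[l - 1] + (1 - np.tanh(z[l]) ** 2) * (W[l] @ errs[l]))
    z[0] += lr * (1 - np.tanh(z[0]) ** 2) * (W[0] @ errs[0])
```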
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
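A stylized sketch of the min-max formulation on a toy instrumental-variable problem (the data-generating process, penalty, and alternating gradient scheme below are illustrative assumptions, not the paper's exact algorithm): a minimizer f tries to satisfy a conditional moment condition, while a maximizer g searches for test functions that violate it.

```python
import torch, torch.nn as nn

# Illustrative min-max estimation with two neural networks.
f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # structural function
g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # adversarial test fn
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(500):
    Z = torch.randn(256, 1)                         # instrument
    X = Z + 0.5 * torch.randn(256, 1)               # instrument-driven regressor
    Y = torch.sin(X) + 0.3 * torch.randn(256, 1)    # true structural function: sin

    # Maximizer step: g looks for a violated moment condition E[(Y - f(X)) g(Z)] = 0.
    moment = ((Y - f(X).detach()) * g(Z)).mean()
    loss_g = -moment ** 2 + 0.1 * (g(Z) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Minimizer step: f reduces the worst-case violation found by g.
    moment = ((Y - f(X)) * g(Z).detach()).mean()
    loss_f = moment ** 2
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```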
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons [4.899818550820575]
We develop a two-step inference strategy that allows us to train robust generalized linear models of interacting neurons.
We show that, compared to classical methods, the models trained in this way exhibit improved performance.
The method can be extended to deep convolutional neural networks, leading to models with high predictive accuracy for both the neuron firing rates and their correlations.
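A toy rendering of the model class and the two-step spirit (the paper's actual strategy differs in detail): first fit each neuron's stimulus dependence with couplings switched off, then fit pairwise coupling weights with the stimulus filters frozen, both by Poisson maximum likelihood.

```python
import numpy as np

# Illustrative two-step fit of a GLM of interacting neurons; not the paper's method.
rng = np.random.default_rng(6)
N, T, lr = 5, 2000, 1e-3
stim = rng.standard_normal(T)
Y = rng.poisson(0.5, size=(N, T)).astype(float)   # stand-in for recorded spike counts

b = np.zeros(N)                                   # per-neuron stimulus weights
C = np.zeros((N, N))                              # pairwise coupling weights
Ypast = np.hstack([np.zeros((N, 1)), Y[:, :-1]])  # one-bin spike history

def lam(b, C):
    return np.exp(np.outer(b, stim) + C @ Ypast - 1.0)  # (N, T) Poisson rates

# Step 1: fit single-neuron stimulus filters with couplings held at zero.
for _ in range(300):
    r = lam(b, 0 * C)
    b += lr * ((Y - r) * stim).sum(axis=1) / T    # Poisson log-likelihood gradient

# Step 2: fit couplings with the stimulus filters frozen.
for _ in range(300):
    r = lam(b, C)
    C += lr * (Y - r) @ Ypast.T / T
```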
arXiv Detail & Related papers (2020-06-11T15:09:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.