Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
- URL: http://arxiv.org/abs/2006.07429v2
- Date: Wed, 21 Oct 2020 15:06:53 GMT
- Title: Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
- Authors: Kristopher T. Jensen, Ta-Chu Kao, Marco Tripodi, and Guillaume Hennequin
- Abstract summary: A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables.
Here, we propose a new probabilistic latent variable model to simultaneously identify the latent state and the way each neuron contributes to its representation.
- Score: 5.949779668853555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common problem in neuroscience is to elucidate the collective neural
representations of behaviorally important variables such as head direction,
spatial location, upcoming movements, or mental spatial transformations. Often,
these latent variables are internal constructs not directly accessible to the
experimenter. Here, we propose a new probabilistic latent variable model to
simultaneously identify the latent state and the way each neuron contributes to
its representation in an unsupervised way. In contrast to previous models which
assume Euclidean latent spaces, we embrace the fact that latent states often
belong to symmetric manifolds such as spheres, tori, or rotation groups of
various dimensions. We therefore propose the manifold Gaussian process latent
variable model (mGPLVM), where neural responses arise from (i) a shared latent
variable living on a specific manifold, and (ii) a set of non-parametric tuning
curves determining how each neuron contributes to the representation.
Cross-validated comparisons of models with different topologies can be used to
distinguish between candidate manifolds, and variational inference enables
quantification of uncertainty. We demonstrate the validity of the approach on
several synthetic datasets, as well as on calcium recordings from the ellipsoid
body of Drosophila melanogaster and extracellular recordings from the mouse
anterodorsal thalamic nucleus. These circuits are both known to encode head
direction, and mGPLVM correctly recovers the ring topology expected from neural
populations representing a single angular variable.
Related papers
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z)
- Unsupervised discovery of the shared and private geometry in multi-view data [1.8816600430294537]
We develop a nonlinear neural network-based method that disentangles low-dimensional shared and private latent variables.
We demonstrate our model's ability to discover interpretable shared and private structure across different noise conditions.
Applying our method to simultaneous Neuropixels recordings of hippocampus and prefrontal cortex while mice run on a linear track, we discover a low-dimensional shared latent space that encodes the animal's position.
arXiv Detail & Related papers (2024-08-22T03:00:21Z)
- Latent Variable Sequence Identification for Cognitive Models with Neural Bayes Estimation [7.7227297059345466]
We present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space.
Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models.
arXiv Detail & Related papers (2024-06-20T21:13:39Z)
- Multi-modal Gaussian Process Variational Autoencoders for Neural and Behavioral Data [0.9622208190558754]
We propose an unsupervised latent variable model which extracts temporally evolving shared and independent latents for distinct, simultaneously recorded experimental modalities.
We validate our model on simulated multi-modal data consisting of Poisson spike counts and MNIST images that scale and rotate smoothly over time.
We show that the multi-modal GP-VAE is able to not only identify the shared and independent latent structure across modalities accurately, but provides good reconstructions of both images and neural rates on held-out trials.
arXiv Detail & Related papers (2023-10-04T19:04:55Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles [3.625425081454343]
Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity.
These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity.
We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization.
arXiv Detail & Related papers (2022-10-06T18:37:49Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Unifying and generalizing models of neural dynamics during decision-making [27.46508483610472]
We propose a unifying framework for modeling neural activity during decision-making tasks.
The framework includes the canonical drift-diffusion model and enables extensions such as multi-dimensional accumulators, variable and collapsing boundaries, and discrete jumps.
arXiv Detail & Related papers (2020-01-13T23:57:28Z)
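As a concrete reference point for the last entry above, the canonical drift-diffusion model it extends can be simulated in a few lines with an Euler-Maruyama scheme. This is a minimal hedged sketch with illustrative parameter values, not code from the paper:

```python
# Minimal drift-diffusion model (DDM): evidence x accumulates with a
# constant drift plus Gaussian noise until it crosses +/- boundary.
import numpy as np

def simulate_ddm(drift=0.5, boundary=1.0, noise=1.0, dt=1e-3,
                 max_t=10.0, rng=None):
    """Return (reaction time, choice) for one simulated trial.
    choice is +1/-1 for the upper/lower boundary, 0 if max_t elapses."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= boundary else (-1 if x <= -boundary else 0)
    return t, choice

# With positive drift, the upper boundary should be chosen more often
trials = [simulate_ddm(rng=np.random.default_rng(s)) for s in range(100)]
mean_rt = np.mean([t for t, _ in trials])
p_upper = np.mean([c == 1 for _, c in trials])
print(mean_rt, p_upper)
```

Multi-dimensional accumulators, collapsing boundaries, or discrete jumps, the extensions the framework covers, would replace the scalar update rule or make `boundary` a function of `t`.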
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.