Learning Time-Invariant Representations for Individual Neurons from Population Dynamics
- URL: http://arxiv.org/abs/2311.02258v1
- Date: Fri, 3 Nov 2023 22:30:12 GMT
- Title: Learning Time-Invariant Representations for Individual Neurons from Population Dynamics
- Authors: Lu Mi, Trung Le, Tianxing He, Eli Shlizerman, Uygar Sümbül
- Abstract summary: We propose a self-supervised learning-based method to assign time-invariant representations to individual neurons.
We fit dynamical models to neuronal activity to learn a representation by considering the activity of both the individual neuron and the neighboring population.
We demonstrate our method on a public multimodal dataset of mouse cortical neuronal activity and transcriptomic labels.
- Score: 29.936569965875375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neurons can display highly variable dynamics. While such variability
presumably supports the wide range of behaviors an organism generates, neurons'
gene expression profiles remain relatively stable in the adult brain. This suggests
that neuronal activity combines the neuron's time-invariant identity with the
inputs it receives from the rest of the circuit. Here, we propose a
self-supervised learning-based method to assign time-invariant representations
to individual neurons based on a permutation- and population-size-invariant
summary of population recordings. We fit dynamical models to neuronal activity
to learn a representation by considering the activity of both the individual
neuron and the neighboring population. Our self-supervised approach and use of
implicit representations enable robust inference against imperfections such as
partial overlap of neurons across sessions, trial-to-trial variability, and
limited availability of molecular (transcriptomic) labels for downstream
supervised tasks. We demonstrate our method on a public multimodal dataset of
mouse cortical neuronal activity and transcriptomic labels. We report > 35%
improvement in predicting the transcriptomic subclass identity and > 20%
improvement in predicting class identity with respect to the state-of-the-art.
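As an illustrative sketch of the recipe (not the authors' released code), the toy PyTorch model below pairs a per-neuron encoder with a mean-pooled population summary, one simple choice of permutation- and population-size-invariant statistic, and trains it self-supervised on next-step prediction; the per-neuron codes `reps` play the role of the learned representations. All sizes and layers are invented for the sketch.

```python
import torch
import torch.nn as nn

class NeuronPopulationModel(nn.Module):
    """Toy sketch: per-neuron codes conditioned on a permutation- and
    population-size-invariant summary of the surrounding population."""
    def __init__(self, window=32, d_embed=16, d_hidden=64):
        super().__init__()
        self.neuron_enc = nn.Sequential(          # encodes one neuron's recent activity
            nn.Linear(window, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_embed))
        self.predictor = nn.Sequential(           # self-supervised next-step head
            nn.Linear(2 * d_embed, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, activity):                  # activity: (n_neurons, window)
        h = self.neuron_enc(activity)             # (n_neurons, d_embed) per-neuron codes
        ctx = h.mean(dim=0, keepdim=True)         # mean pool: order- and count-invariant
        ctx = ctx.expand(h.shape[0], -1)          # broadcast context back to each neuron
        pred = self.predictor(torch.cat([h, ctx], dim=-1)).squeeze(-1)
        return pred, h

model = NeuronPopulationModel()
x = torch.randn(100, 33)                          # toy recording: 100 neurons, 33 bins
pred, reps = model(x[:, :32])                     # predict each neuron's next bin
loss = nn.functional.mse_loss(pred, x[:, 32])     # self-supervised objective
loss.backward()
```

Mean pooling is what buys the invariances: permuting neurons or dropping some of them affects the context only through the population average, which is why such representations remain comparable across sessions with partially overlapping neurons.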
Related papers
- SynapsNet: Enhancing Neuronal Population Dynamics Modeling via Learning Functional Connectivity [0.0]
We introduce SynapsNet, a novel deep-learning framework that effectively models population dynamics and functional interactions between neurons.
A shared decoder uses the input current, previous neuronal activity, neuron embedding, and behavioral data to predict the population activity in the next time step.
Our experiments, conducted on mouse cortical activity from publicly available datasets, demonstrate that SynapsNet consistently outperforms existing models in forecasting population activity.
arXiv Detail & Related papers (2024-11-12T22:25:15Z)
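Reading off the stated inputs to SynapsNet's shared decoder, a minimal sketch might look as follows; the softmax-bilinear "functional connectivity" used to form the input current is a guess for illustration, not SynapsNet's actual mechanism, and all sizes are invented.

```python
import torch
import torch.nn as nn

class SharedDecoder(nn.Module):
    """Sketch: one decoder shared across neurons, fed with previous activity,
    an input current routed through embedding similarity, the neuron's own
    embedding, and behavior."""
    def __init__(self, n_neurons, d_embed=16, d_behavior=4, d_hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_neurons, d_embed)   # learned per-neuron identity
        self.mlp = nn.Sequential(
            nn.Linear(2 + d_embed + d_behavior, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1))

    def forward(self, prev, behavior):                  # prev: (n,), behavior: (d_behavior,)
        e = self.embed.weight                           # (n, d_embed)
        conn = torch.softmax(e @ e.T, dim=-1)           # illustrative functional connectivity
        current = conn @ prev.unsqueeze(-1)             # input current from the population
        feats = torch.cat([prev.unsqueeze(-1), current, e,
                           behavior.expand(prev.shape[0], -1)], dim=-1)
        return self.mlp(feats).squeeze(-1)              # (n,) predicted next-step activity

dec = SharedDecoder(n_neurons=100)
pred = dec(torch.rand(100), torch.rand(4))
```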
- Modeling dynamic neural activity by combining naturalistic video stimuli and stimulus-independent latent factors [5.967290675400836]
We propose a probabilistic model that incorporates video inputs along with stimulus-independent latent factors to capture variability in neuronal responses.
After training and testing our model on mouse V1 neuronal responses, we found that it outperforms video-only models in terms of log-likelihood.
We find that the learned latent factors strongly correlate with mouse behavior, although the model was trained without behavior data.
arXiv Detail & Related papers (2024-10-21T16:01:39Z)
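A drastically simplified sketch of the video-plus-latents idea above: log firing rates read out from video features concatenated with stimulus-independent latents (here, naively, one free latent per time step), scored by Poisson log-likelihood as in the video-only comparison. The real model is a proper probabilistic formulation; this only shows the structure.

```python
import torch
import torch.nn.functional as F

class VideoPlusLatents(torch.nn.Module):
    """Sketch: responses = f(video features, stimulus-independent latents)."""
    def __init__(self, n_neurons, d_video=128, d_latent=8, n_timesteps=200):
        super().__init__()
        self.readout = torch.nn.Linear(d_video + d_latent, n_neurons)
        self.latents = torch.nn.Parameter(torch.zeros(n_timesteps, d_latent))

    def forward(self, video_feats):                # (T, d_video) from any video encoder
        z = torch.cat([video_feats, self.latents], dim=-1)
        return self.readout(z)                     # (T, n_neurons) log Poisson rates

model = VideoPlusLatents(n_neurons=50)
log_rates = model(torch.randn(200, 128))
spikes = torch.poisson(log_rates.detach().exp())   # toy spike counts
nll = F.poisson_nll_loss(log_rates, spikes, log_input=True)   # comparison metric
```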
- STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activity across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z)
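STNDT's spatiotemporal part is easy to sketch as a generic two-axis attention block: each neuron attends over time, then each time step attends over neurons. This shows the pattern only, with invented sizes, not STNDT's actual layers or training losses.

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """Generic two-axis attention over a (neurons x time x features) tensor."""
    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):                        # x: (n_neurons, T, d_model)
        t, _ = self.temporal(x, x, x)            # each neuron attends across time
        x = x + t                                # residual connection
        s = x.transpose(0, 1)                    # (T, n_neurons, d_model)
        a, _ = self.spatial(s, s, s)             # each time step attends across neurons
        return (s + a).transpose(0, 1)

out = SpatioTemporalBlock()(torch.randn(80, 120, 32))   # 80 neurons, 120 time bins
```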
- Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles [1.2617078020344619]
We show that self-supervised training of a deep neural network can be used to compensate for inter-session variability.
A sequential autoencoding model can maintain state-of-the-art behaviour decoding performance for completely unseen recording sessions several days into the future.
arXiv Detail & Related papers (2022-05-19T20:00:33Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
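GraphDINO's name points at its training recipe: DINO-style self-distillation between a student and an exponential-moving-average teacher over two augmented views of the same neuron's morphology graph. A generic sketch of that recipe, with a linear layer standing in for the graph encoder and the actual graph augmentations omitted:

```python
import copy
import torch
import torch.nn.functional as F

def dino_step(student, teacher, view1, view2, opt, momentum=0.996, temp=0.1):
    """One step of DINO-style self-distillation on two augmented views of the
    same input (generic recipe; GraphDINO's transformer and graph
    augmentations are not reproduced here)."""
    s_out = torch.stack([student(view1), student(view2)])
    with torch.no_grad():                              # teacher sees the crossed views
        t_out = torch.stack([teacher(view2), teacher(view1)])
    loss = -(F.softmax(t_out / temp, -1) * F.log_softmax(s_out / temp, -1)).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                              # teacher trails student via EMA
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)
    return loss.item()

student = torch.nn.Linear(32, 64)                      # stand-in for a graph encoder
teacher = copy.deepcopy(student)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
dino_step(student, teacher, torch.randn(8, 32), torch.randn(8, 32), opt)
```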
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
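The cross-animal swap in the entry above can be sketched directly, using cosine similarity of behavioral features as a stand-in for the paper's notion of animals "performing similar actions":

```python
import torch
import torch.nn.functional as F

def swap_across_animals(neural, behavior, animal_id):
    """Sketch of the swap augmentation: pair each sample's behavioral data with
    the neural data of the most behaviorally similar sample from a *different*
    animal (assumes at least two animals are present)."""
    b = F.normalize(behavior, dim=-1)
    sim = b @ b.T                                  # cosine similarity of behaviors
    same = animal_id.unsqueeze(0) == animal_id.unsqueeze(1)
    sim = sim.masked_fill(same, float("-inf"))     # never match within an animal
    partner = sim.argmax(dim=-1)                   # closest action from another animal
    return neural[partner], behavior               # swapped neural, original behavior

neural, behavior = torch.randn(64, 100), torch.randn(64, 12)
animal_id = torch.randint(0, 4, (64,))
swapped_neural, behavior = swap_across_animals(neural, behavior, animal_id)
```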
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
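The "swap" in Swap-VAE and the alignment loss are concrete enough to sketch. Below, the first d_content latent dimensions are assumed to carry the behavior-linked content; the split and losses are illustrative, not the released Swap-VAE code.

```python
import torch

def swap_content(z_a, z_b, d_content):
    """Sketch of the 'swap': split each latent into a content block (first
    d_content dims) and a style block, then exchange content between two
    views of the same trial before decoding."""
    swapped_a = torch.cat([z_b[:, :d_content], z_a[:, d_content:]], dim=-1)
    swapped_b = torch.cat([z_a[:, :d_content], z_b[:, d_content:]], dim=-1)
    return swapped_a, swapped_b

def alignment_loss(z_a, z_b, d_content):
    """Instance-specific alignment: content halves of two views of the same
    trial are pulled together."""
    return (z_a[:, :d_content] - z_b[:, :d_content]).pow(2).mean()

z_a, z_b = torch.randn(16, 32), torch.randn(16, 32)   # latents of two augmented views
sa, sb = swap_content(z_a, z_b, d_content=8)
loss = alignment_loss(z_a, z_b, d_content=8)
```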
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
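The homeostasis idea in MPATH is straightforward to illustrate: a leaky membrane potential paired with an activation threshold that rises whenever the unit fires, so sustained input settles the unit at a stable rate instead of saturating it. All constants below are invented for the sketch, not taken from the paper.

```python
import numpy as np

def homeostatic_neuron(inputs, tau_v=0.9, tau_th=0.999, th0=1.0):
    """Sketch: leaky integration plus a threshold that relaxes toward a
    baseline but is pushed up by recent firing (dynamic equilibrium)."""
    v, th, spikes = 0.0, th0, []
    for x in inputs:
        v = tau_v * v + x                 # leaky integration of input
        fired = v >= th
        spikes.append(fired)
        if fired:
            v = 0.0                       # reset after a spike
        th = tau_th * th + (1 - tau_th) * (th0 + 10.0 * fired)  # homeostasis
    return np.array(spikes)

rate = homeostatic_neuron(np.random.rand(10_000)).mean()  # settles near a stable rate
```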
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
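The procedure in "Compositional Explanations of Neurons" is roughly: binarize a neuron's activations into a mask, then search for a logical formula over human-interpretable concept masks that maximizes IoU with it. A greedy toy version follows (the paper itself uses beam search over a richer formula space); all masks are assumed to be boolean numpy arrays.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    return (a & b).sum() / max((a | b).sum(), 1)

def explain_neuron(act_mask, concepts, max_len=3):
    """Greedily grow a logical formula over concept masks that best matches
    where the neuron is active."""
    ops = [("AND", np.logical_and), ("OR", np.logical_or),
           ("AND NOT", lambda a, b: a & ~b)]
    items = list(concepts.items())
    name, mask = max(items, key=lambda kv: iou(act_mask, kv[1]))
    for _ in range(max_len - 1):
        cand = max(((f"({name}) {op} {n2}", fn(mask, m2))
                    for op, fn in ops for n2, m2 in items),
                   key=lambda kv: iou(act_mask, kv[1]))
        if iou(act_mask, cand[1]) <= iou(act_mask, mask):
            break                          # stop when no operator improves the match
        name, mask = cand
    return name, iou(act_mask, mask)

rng = np.random.default_rng(0)
concepts = {c: rng.random(1000) < 0.2 for c in ["water", "sky", "blue", "boat"]}
act = concepts["water"] | (concepts["sky"] & ~concepts["boat"])  # toy neuron
print(explain_neuron(act, concepts))
```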