Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks
- URL: http://arxiv.org/abs/2111.13073v1
- Date: Thu, 25 Nov 2021 13:24:19 GMT
- Title: Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks
- Authors: Bryan M. Li, Theoklitos Amvrosiadis, Nathalie Rochefort, Arno Onken
- Abstract summary: We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess calcium fluorescence signals and to train and evaluate deep learning models, together with a procedure to interpret the resulting models.
- Score: 4.874780144224057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding how activity in neural circuits reshapes following task
learning could reveal fundamental mechanisms of learning. Thanks to the recent
advances in neural imaging technologies, high-quality recordings can be
obtained from hundreds of neurons over multiple days or even weeks. However,
the complexity and dimensionality of population responses pose significant
challenges for analysis. Existing methods of studying neuronal adaptation and
learning often impose strong assumptions on the data or model, resulting in
biased descriptions that do not generalize. In this work, we use a variant of
deep generative models, CycleGAN, to learn the unknown mapping between
pre- and post-learning neural activities recorded $\textit{in vivo}$. We
develop an end-to-end pipeline to preprocess calcium fluorescence signals and to
train and evaluate deep learning models, together with a procedure to interpret
the resulting models. To assess the validity of our method, we first test our
framework on a synthetic dataset with a known ground-truth transformation.
Subsequently, we apply our method to neural activities recorded from the primary
visual cortex of behaving mice as they transition from novice to expert-level
performance in a visual-based virtual reality experiment. We evaluate model
performance on generated calcium signals and their inferred spike trains. To
maximize performance, we derive a novel approach to pre-sort neurons such that
convolution-based networks can take advantage of the spatial information that
exists in neural activities. In addition, we incorporate visual explanation
methods to improve the interpretability of our work and gain insights into the
learning process as manifested in the cellular activities. Together, our
results demonstrate that analyzing neuronal learning processes with data-driven
deep unsupervised methods holds the potential to unravel changes in an unbiased
way.
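To make the approach more concrete, the following is a minimal, hypothetical sketch of two ingredients the abstract describes: pre-sorting neurons by correlation similarity so that convolution-based networks can exploit local structure across channels, and a CycleGAN objective coupling a pre-to-post generator with a post-to-pre generator through adversarial and cycle-consistency losses. All class names, layer choices, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (PyTorch/NumPy), not the authors' code.
import numpy as np
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform


def presort_neurons(activity: np.ndarray) -> np.ndarray:
    """Order neurons by the leaf order of a hierarchical clustering of their
    correlation matrix, so that neighbouring channels carry similar signals.
    activity: (n_neurons, total_time) array of calcium traces."""
    corr = np.corrcoef(activity)
    dist = squareform(1.0 - corr, checks=False)           # condensed distances
    return leaves_list(linkage(dist, method="average"))   # neuron permutation


class Generator(nn.Module):
    """1D convolutional generator over time; one channel per (pre-sorted) neuron."""
    def __init__(self, n_neurons: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_neurons, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, n_neurons, kernel_size=5, padding=2),
        )

    def forward(self, x):                                  # x: (batch, n_neurons, time)
        return self.net(x)


class Discriminator(nn.Module):
    """Scores a (batch, n_neurons, time) segment as real or generated."""
    def __init__(self, n_neurons: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_neurons, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


def cyclegan_losses(G_pre2post, G_post2pre, D_pre, D_post, x_pre, x_post,
                    lambda_cyc: float = 10.0):
    """LSGAN-style adversarial losses plus cycle consistency between the two domains."""
    mse, l1 = nn.MSELoss(), nn.L1Loss()

    fake_post = G_pre2post(x_pre)                          # pre -> "post-learning" activity
    fake_pre = G_post2pre(x_post)                          # post -> "pre-learning" activity

    # Generators try to make the discriminators label generated segments as real.
    pf_post, pf_pre = D_post(fake_post), D_pre(fake_pre)
    g_adv = mse(pf_post, torch.ones_like(pf_post)) + mse(pf_pre, torch.ones_like(pf_pre))

    # Cycle consistency: pre -> post -> pre should recover the input, and vice versa.
    cyc = l1(G_post2pre(fake_post), x_pre) + l1(G_pre2post(fake_pre), x_post)
    g_loss = g_adv + lambda_cyc * cyc

    # Discriminators: real segments labelled 1, generated (detached) segments 0.
    pr_post, pr_pre = D_post(x_post), D_pre(x_pre)
    pd_post, pd_pre = D_post(fake_post.detach()), D_pre(fake_pre.detach())
    d_loss = (mse(pr_post, torch.ones_like(pr_post)) + mse(pd_post, torch.zeros_like(pd_post))
              + mse(pr_pre, torch.ones_like(pr_pre)) + mse(pd_pre, torch.zeros_like(pd_pre)))
    return g_loss, d_loss
```

In this sketch, each training sample would be a fixed-length segment of the pre-sorted calcium traces, and the generator and discriminator losses would be minimised in alternation, as in standard CycleGAN training.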
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
Neuro-Symbolic Integration (NeSy) combines neural approaches with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently mitigates forgetting in spiking neural networks, achieving nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets and found that it both accurately predicted simulated neuronal circuit activity and intrinsically inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z) - Understanding Activation Patterns in Artificial Neural Networks by
Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z) - Interpretable statistical representations of neural population dynamics and geometry [4.459704414303749]
We introduce a representation learning method, MARBLE, that decomposes on-manifold dynamics into local flow fields and maps them into a common latent space.
In simulated non-linear dynamical systems, recurrent neural networks, and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations.
These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations.
arXiv Detail & Related papers (2023-04-06T21:11:04Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset of spontaneous behaviors exhibited by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Dynamic Neural Diversification: Path to Computationally Sustainable
Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)