Fitting summary statistics of neural data with a differentiable spiking
network simulator
- URL: http://arxiv.org/abs/2106.10064v1
- Date: Fri, 18 Jun 2021 11:21:30 GMT
- Title: Fitting summary statistics of neural data with a differentiable spiking
network simulator
- Authors: Guillaume Bellec, Shuqi Wang, Alireza Modirshanechi, Johanni Brea,
Wulfram Gerstner
- Abstract summary: A popular approach is to model a brain area with a probabilistic recurrent spiking network whose parameters maximize the likelihood of the recorded activity.
We show that the resulting model does not produce realistic neural activity.
We suggest augmenting the log-likelihood with terms that measure the dissimilarity between simulated and recorded activity.
- Score: 4.987315310656657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fitting network models to neural activity is becoming an important tool in
neuroscience. A popular approach is to model a brain area with a probabilistic
recurrent spiking network whose parameters maximize the likelihood of the
recorded activity. Although this is widely used, we show that the resulting
model does not produce realistic neural activity and wrongly estimates the
connectivity matrix when neurons that are not recorded have a substantial
impact on the recorded network. To correct for this, we suggest augmenting the
log-likelihood with terms that measure the dissimilarity between simulated and
recorded activity. This dissimilarity is defined via summary statistics
commonly used in neuroscience, and the optimization is efficient because it
relies on back-propagation through the stochastically simulated spike trains.
We analyze this method theoretically and show empirically that it generates
more realistic activity statistics and recovers the connectivity matrix better
than other methods.
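The method in the abstract reduces to a compact recipe: simulate the spiking network so that gradients flow through the sampled spikes, then minimize the negative log-likelihood plus a weighted mismatch between summary statistics of simulated and recorded activity. Below is a minimal, hypothetical PyTorch sketch of that recipe; the Bernoulli-GLM network, the straight-through gradient estimator, and the firing-rate statistic are illustrative stand-ins, not the authors' actual model, estimator, or statistics.

```python
import torch

torch.manual_seed(0)
n_neurons, n_steps, n_trials = 10, 200, 32

# Stand-in for recorded spike trains: binary tensor (trials, time, neurons).
recorded = (torch.rand(n_trials, n_steps, n_neurons) < 0.1).float()

# Illustrative probabilistic recurrent spiking model: a Bernoulli GLM with
# recurrent weights W and biases b (hypothetical, not the paper's exact model).
W = torch.zeros(n_neurons, n_neurons, requires_grad=True)
b = torch.full((n_neurons,), -2.0, requires_grad=True)

def spike_prob(prev_spikes):
    """Firing probability given the previous time step's spikes."""
    return torch.sigmoid(prev_spikes @ W + b)

def simulate(n_trials):
    """Stochastic simulation; a straight-through estimator lets gradients
    flow through the sampled binary spikes."""
    spikes, prev = [], torch.zeros(n_trials, n_neurons)
    for _ in range(n_steps):
        p = spike_prob(prev)
        hard = (torch.rand_like(p) < p).float()   # non-differentiable sample
        prev = hard + p - p.detach()              # forward: hard, backward: dp
        spikes.append(prev)
    return torch.stack(spikes, dim=1)             # (trials, time, neurons)

def log_likelihood(data):
    """Bernoulli log-likelihood of recorded spikes (one-step teacher forcing)."""
    prev = torch.cat([torch.zeros_like(data[:, :1]), data[:, :-1]], dim=1)
    p = spike_prob(prev)
    return (data * (p + 1e-8).log() + (1 - data) * (1 - p + 1e-8).log()).mean()

def summary_stats(spikes):
    """Example summary statistic: per-neuron mean firing rate."""
    return spikes.mean(dim=(0, 1))

# Augmented objective: negative log-likelihood plus a dissimilarity term
# between summary statistics of simulated and recorded activity.
opt = torch.optim.Adam([W, b], lr=1e-2)
lam = 10.0                                        # trade-off hyperparameter
for step in range(200):
    opt.zero_grad()
    sim = simulate(n_trials)
    dissim = ((summary_stats(sim) - summary_stats(recorded)) ** 2).sum()
    loss = -log_likelihood(recorded) + lam * dissim
    loss.backward()
    opt.step()
```

In practice one would use richer statistics (PSTHs, pairwise correlations) and the gradient estimator analyzed in the paper; the sketch only shows the shape of the augmented objective.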
Related papers
- Inferring stochastic low-rank recurrent neural networks from neural data [5.179844449042386]
A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system.
Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics.
Here, we propose to fit low-rank RNNs with variational sequential Monte Carlo methods.
arXiv Detail & Related papers (2024-06-24T15:57:49Z)
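For context on why low-rank RNNs are tractable: a rank-R connectivity matrix J = UVᵀ/N confines the dynamics to an R-dimensional latent subspace. The snippet below is a hypothetical sketch of the generative dynamics only (the Euler discretization, tanh nonlinearity, and Gaussian noise are assumptions); the paper's contribution, variational sequential Monte Carlo inference, is not shown.

```python
import torch

torch.manual_seed(0)
N, R, T, dt = 500, 2, 300, 0.1    # neurons, rank, time steps, Euler step

# Rank-R connectivity J = U @ V.T / N: the whole recurrent matrix is specified
# by two N x R factors, confining the dynamics to an R-dimensional subspace.
U, V = torch.randn(N, R), torch.randn(N, R)

x = torch.zeros(N)                 # neuron input currents
kappa = []                         # low-dimensional latent trajectory
for _ in range(T):
    r = torch.tanh(x)              # firing rates
    x = x + dt * (-x + U @ (V.T @ r) / N) + 0.02 * torch.randn(N)
    kappa.append(V.T @ r / N)      # R latent variables summarizing the state

kappa = torch.stack(kappa)         # shape (T, R): the interpretable dynamics
```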
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
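The autoregressive reframing can be illustrated in miniature: bin the spike trains, treat each bin's population vector as a token, and train a causal transformer to predict the next bin. The toy model below is an assumption-laden sketch, not Neuroformer's multimodal architecture.

```python
import torch
import torch.nn as nn

n_neurons, n_bins, d_model = 64, 100, 128

class SpikeAR(nn.Module):
    """Toy causal transformer over binned population spike vectors."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(n_neurons, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.readout = nn.Linear(d_model, n_neurons)

    def forward(self, spikes):                     # (batch, time, neurons)
        T = spikes.size(1)
        # Upper-triangular mask: each bin attends only to past bins.
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.embed(spikes), mask=causal)
        return self.readout(h)                     # next-bin spike logits

model = SpikeAR()
spikes = (torch.rand(8, n_bins, n_neurons) < 0.1).float()  # fake binned data
logits = model(spikes[:, :-1])                     # condition on bins 0..T-2
loss = nn.functional.binary_cross_entropy_with_logits(logits, spikes[:, 1:])
loss.backward()                                    # standard autoregressive loss
```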
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving [0.0]
Spiking Neural Networks are a new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency.
We first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving.
arXiv Detail & Related papers (2022-06-06T20:05:17Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors of fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Online neural connectivity estimation with ensemble stimulation [5.156484100374058]
We propose a method based on noisy group testing that drastically increases the efficiency of connectivity estimation in sparse networks.
We show that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size.
We also demonstrate the feasibility of inferring connectivity for networks of up to tens of thousands of neurons online.
arXiv Detail & Related papers (2020-07-27T23:47:03Z)
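The logarithmic scaling comes from classic group testing: stimulating random ensembles and observing whether a readout neuron responds lets each test rule out many candidate connections at once. The sketch below uses a noiseless OR response and the simple COMP decoder as stand-ins for the paper's noisy Bayesian treatment; the pool size and test count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 5                    # candidate presynaptic neurons, true connections
truth = np.zeros(n, bool)
truth[rng.choice(n, k, replace=False)] = True

# Each "test" stimulates a random ensemble; the readout neuron responds iff
# the ensemble contains at least one true presynaptic partner (an OR).
m = int(4 * k * np.log(n))        # number of tests, O(k log n), not O(n)
pools = rng.random((m, n)) < 1.0 / k
responses = (pools & truth).any(axis=1)

# COMP decoder: a neuron is ruled out if it appears in any negative test;
# true connections are never in a negative test, so there are no misses.
ruled_out = pools[~responses].any(axis=0)
estimate = ~ruled_out
print((estimate == truth).all())  # exact recovery w.h.p. in the noiseless case
```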
- Efficient Inference of Flexible Interaction in Spiking-neuron Networks [41.83710212492543]
We use the nonlinear Hawkes process to model excitatory or inhibitory interactions among neurons.
We show our algorithm can estimate the temporal dynamics of interaction and reveal the interpretable functional connectivity underlying neural spike trains.
arXiv Detail & Related papers (2020-06-23T09:10:30Z)
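For reference, a nonlinear Hawkes process makes a neuron's firing intensity a nonlinear function of a baseline plus kernel-filtered input from past spikes, with signed weights capturing excitation and inhibition. The sketch below assumes an exponential kernel and exponential nonlinearity for concreteness; the paper infers these interactions from data rather than fixing them.

```python
import numpy as np

def conditional_intensity(t, spike_times, mu, w, tau, f=np.exp):
    """Nonlinear Hawkes intensity for one neuron at time t:

    lambda(t) = f( mu + sum_j w[j] * sum_{s in spikes_j, s < t} exp(-(t-s)/tau) )

    Positive w[j] models excitation, negative w[j] inhibition; the
    nonlinearity f keeps the intensity non-negative.
    """
    drive = mu
    for j, times in enumerate(spike_times):
        past = np.asarray(times)
        past = past[past < t]                     # only spikes before t count
        drive += w[j] * np.exp(-(t - past) / tau).sum()
    return f(drive)

# Two presynaptic neurons: one excitatory (+0.8), one inhibitory (-0.5).
spikes = [[0.2, 0.9, 1.5], [0.5, 1.1]]
print(conditional_intensity(2.0, spikes, mu=0.1, w=[0.8, -0.5], tau=0.3))
```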
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)