Online neural connectivity estimation with ensemble stimulation
- URL: http://arxiv.org/abs/2007.13911v2
- Date: Mon, 7 Dec 2020 16:26:54 GMT
- Title: Online neural connectivity estimation with ensemble stimulation
- Authors: Anne Draelos, Eva A. Naumann, John M. Pearson
- Abstract summary: We propose a method based on noisy group testing that drastically increases the efficiency of this process in sparse networks.
We show that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size.
We also demonstrate the feasibility of inferring connectivity for networks of up to tens of thousands of neurons online.
- Score: 5.156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the primary goals of systems neuroscience is to relate the structure
of neural circuits to their function, yet patterns of connectivity are
difficult to establish when recording from large populations in behaving
organisms. Many previous approaches have attempted to estimate functional
connectivity between neurons using statistical modeling of observational data,
but these approaches rely heavily on parametric assumptions and are purely
correlational. Recently, however, holographic photostimulation techniques have
made it possible to precisely target selected ensembles of neurons, offering
the possibility of establishing direct causal links. Here, we propose a method
based on noisy group testing that drastically increases the efficiency of this
process in sparse networks. By stimulating small ensembles of neurons, we show
that it is possible to recover binarized network connectivity with a number of
tests that grows only logarithmically with population size under minimal
statistical assumptions. Moreover, we prove that our approach, which reduces to
an efficiently solvable convex optimization problem, can be related to
Variational Bayesian inference on the binary connection weights, and we derive
rigorous bounds on the posterior marginals. This allows us to extend our method
to the streaming setting, where continuously updated posteriors allow for
optional stopping, and we demonstrate the feasibility of inferring connectivity
for networks of up to tens of thousands of neurons online. Finally, we show how
our work can be theoretically linked to compressed sensing approaches, and
compare results for connectivity inference in different settings.
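The logarithmic scaling claim can be illustrated with a toy, noiseless group-testing simulation (this is a hypothetical sketch of the general group-testing principle with a simple COMP-style decoder, not the paper's noisy variational Bayesian method; all variable names and the test budget are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

N, k = 1000, 10              # population size, number of true connections
w = np.zeros(N, dtype=bool)  # binarized connectivity onto one recorded neuron
w[rng.choice(N, size=k, replace=False)] = True

# The number of tests grows only logarithmically in N for fixed sparsity k;
# here we use a COMP-style budget of roughly 2*e*k*ln(N).
T = int(2 * np.e * k * np.log(N))

# Each test stimulates a random ensemble; each neuron joins with probability 1/k.
ensembles = rng.random((T, N)) < 1.0 / k
# Outcome: the recorded neuron responds iff some stimulated neuron connects to it.
outcomes = (ensembles & w).any(axis=1)

# COMP decoder: any neuron appearing in a negative test cannot be connected;
# everything never ruled out is declared connected.
ruled_out = (ensembles & ~outcomes[:, None]).any(axis=0)
w_hat = ~ruled_out

print(T)              # far fewer tests than the N needed for one-at-a-time probing
print(w_hat.sum())    # close to k with high probability
```

In the noiseless setting the decoder never misses a true connection (true connections never appear in negative tests), and with this test budget false positives are rare; the paper's actual method additionally handles noisy outcomes and yields streaming posterior updates.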
Related papers
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Decorrelating neurons using persistence [29.25969187808722]
We present two regularisation terms computed from the weights of a minimum spanning tree of a clique.
We demonstrate that naively minimising all correlations between neurons yields lower accuracy than our regularisation terms.
We include a proof of differentiability of our regularisers, thus developing the first effective topological persistence-based regularisation terms.
arXiv Detail & Related papers (2023-08-09T11:09:14Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles [3.625425081454343]
Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity.
These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity.
We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization.
arXiv Detail & Related papers (2022-10-06T18:37:49Z)
- Neuronal Correlation: a Central Concept in Neural Network [22.764342635264452]
We show that neuronal correlation can be efficiently estimated via weight matrix.
We show that neuronal correlation significantly impacts the accuracy of entropy estimation in high-dimensional hidden spaces.
arXiv Detail & Related papers (2022-01-22T15:01:50Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Factorized Neural Processes for Neural Processes: $K$-Shot Prediction of Neural Responses [9.792408261365043]
We develop a Factorized Neural Process to infer a neuron's tuning function from a small set of stimulus-response pairs.
We show on simulated responses that the predictions and reconstructed receptive fields from the Neural Process approach ground truth with increasing number of trials.
We believe this novel deep learning systems identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
arXiv Detail & Related papers (2020-10-22T15:43:59Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Latent Network Structure Learning from High Dimensional Multivariate Point Processes [5.079425170410857]
We propose a new class of nonstationary Hawkes processes to characterize the complex processes underlying the observed data.
We estimate the latent network structure using an efficient sparse least squares estimation approach.
We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train data set.
arXiv Detail & Related papers (2020-04-07T17:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.