Efficient Inference of Flexible Interaction in Spiking-neuron Networks
- URL: http://arxiv.org/abs/2006.12845v2
- Date: Sat, 20 Feb 2021 14:27:53 GMT
- Title: Efficient Inference of Flexible Interaction in Spiking-neuron Networks
- Authors: Feng Zhou, Yixuan Zhang, Jun Zhu
- Abstract summary: We use the nonlinear Hawkes process to model excitatory or inhibitory interactions among neurons.
We show our algorithm can estimate the temporal dynamics of interaction and reveal the interpretable functional connectivity underlying neural spike trains.
- Score: 41.83710212492543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Hawkes process provides an effective statistical framework for analyzing the
time-dependent interaction of neuronal spiking activities. Although utilized in
many real applications, the classic Hawkes process is incapable of modelling
inhibitory interactions among neurons. Instead, the nonlinear Hawkes process
allows for a more flexible influence pattern with excitatory or inhibitory
interactions. In this paper, three sets of auxiliary latent variables
(Pólya-Gamma variables, latent marked Poisson processes and sparsity
variables) are augmented so that the functional connection weights take a Gaussian
form, which allows for a simple iterative algorithm with analytical updates. As
a result, an efficient expectation-maximization (EM) algorithm is derived to
obtain the maximum a posteriori (MAP) estimate. We demonstrate the accuracy and
efficiency of our algorithm on synthetic and real data. For real
neural recordings, we show that our algorithm can estimate the temporal dynamics of
interaction and reveal the interpretable functional connectivity underlying
neural spike trains.
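For intuition, the model class described in the abstract can be simulated directly. Below is a minimal sketch of a multivariate nonlinear Hawkes process in which positive coupling weights are excitatory and negative weights are inhibitory. The sigmoid link, the exponential influence kernel, and all function and parameter names are illustrative assumptions; the paper's exact kernel, link function and augmentation scheme are not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of a sigmoid-link
# nonlinear Hawkes process simulated with Ogata-style thinning.
# Kernel, link and parameter names are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_nonlinear_hawkes(mu, W, beta, lam_max, T, rng=None):
    """Simulate spike times on [0, T].

    Conditional intensity of neuron i (assumed form):
        lam_i(t) = lam_max * sigmoid(mu_i + sum_j sum_{t_jk < t} W[i, j] * exp(-beta * (t - t_jk)))
    """
    rng = np.random.default_rng(rng)
    n = len(mu)
    events = [[] for _ in range(n)]
    t = 0.0
    while t < T:
        # candidate point from a homogeneous Poisson process with rate n * lam_max,
        # which dominates the total intensity because each lam_i <= lam_max
        t += rng.exponential(1.0 / (n * lam_max))
        if t >= T:
            break
        # conditional intensities given the history up to t
        drive = np.array([
            mu[i] + sum(W[i, j] * np.exp(-beta * (t - s))
                        for j in range(n) for s in events[j])
            for i in range(n)
        ])
        lam = lam_max * sigmoid(drive)
        # thinning step: accept the candidate with probability sum(lam) / (n * lam_max)
        # and assign it to a neuron proportionally to its intensity
        u = rng.uniform(0.0, n * lam_max)
        cum = np.cumsum(lam)
        if u < cum[-1]:
            i = int(np.searchsorted(cum, u))
            events[i].append(t)
    return events

# toy example: neuron 0 excites neuron 1, neuron 1 inhibits neuron 0
mu = np.array([0.0, -0.5])
W = np.array([[0.0, -1.5],
              [2.0,  0.0]])
spikes = simulate_nonlinear_hawkes(mu, W, beta=1.0, lam_max=5.0, T=50.0, rng=0)
print([len(s) for s in spikes])
```

Thinning is valid here because the sigmoid bounds each conditional intensity by lam_max, so n * lam_max is a dominating rate; the same boundedness is what makes excitatory and inhibitory weights equally admissible.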
Related papers
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Neuronal architecture extracts statistical temporal patterns [1.9662978733004601]
We show how higher-order temporal (co-)fluctuations can be employed to represent and process information.
A simple biologically inspired feedforward neuronal model is able to extract information from up to the third-order cumulant to perform time series classification.
arXiv Detail & Related papers (2023-01-24T18:21:33Z)
- Learnable latent embeddings for joint behavioral and neural analysis [3.6062449190184136]
We show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, and rapid, high-accuracy decoding of natural movies from visual cortex.
We validate its accuracy and demonstrate its utility for both calcium and electrophysiology datasets, across sensory and motor tasks, and in simple or complex behaviors across species.
arXiv Detail & Related papers (2022-04-01T19:19:33Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Improving Phenotype Prediction using Long-Range Spatio-Temporal Dynamics of Functional Connectivity [9.015698823470899]
We present an approach to model functional brain connectivity across space and time.
We use the Human Connectome Project dataset on sex classification and fluid intelligence prediction.
Results show a prediction accuracy of 94.4% for sex, and an improvement of correlation with fluid intelligence of 0.325 vs 0.144, relative to a baseline model that encodes space and time separately.
arXiv Detail & Related papers (2021-09-07T14:23:34Z)
- Fitting summary statistics of neural data with a differentiable spiking network simulator [4.987315310656657]
A popular approach is to model a brain area with a probabilistic recurrent spiking network whose parameters maximize the likelihood of the recorded activity.
We show that the resulting model does not produce realistic neural activity.
We suggest augmenting the log-likelihood with terms that measure the dissimilarity between simulated and recorded activity.
arXiv Detail & Related papers (2021-06-18T11:21:30Z)
- Towards Interaction Detection Using Topological Analysis on Neural Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Latent Network Structure Learning from High Dimensional Multivariate Point Processes [5.079425170410857]
We propose a new class of nonstationary Hawkes processes to characterize the complex processes underlying the observed data.
We estimate the latent network structure using an efficient sparse least squares estimation approach.
We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train data set.
arXiv Detail & Related papers (2020-04-07T17:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.