Latent Network Structure Learning from High Dimensional Multivariate
Point Processes
- URL: http://arxiv.org/abs/2004.03569v2
- Date: Tue, 19 Jan 2021 20:07:22 GMT
- Authors: Biao Cai, Jingfei Zhang, Yongtao Guan
- Abstract summary: We propose a new class of nonstationary Hawkes processes to characterize the complex processes underlying the observed data.
We estimate the latent network structure using an efficient sparse least squares estimation approach.
We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train data set.
- Score: 5.079425170410857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the latent network structure from large scale multivariate point
process data is an important task in a wide range of scientific and business
applications. For instance, we might wish to estimate the neuronal functional
connectivity network based on spiking times recorded from a collection of
neurons. To characterize the complex processes underlying the observed data, we
propose a new and flexible class of nonstationary Hawkes processes that allow
both excitatory and inhibitory effects. We estimate the latent network
structure using an efficient sparse least squares estimation approach. Using a
thinning representation, we establish concentration inequalities for the first
and second order statistics of the proposed Hawkes process. Such theoretical
results enable us to establish the non-asymptotic error bound and the selection
consistency of the estimated parameters. Furthermore, we describe a least
squares loss based statistic for testing if the background intensity is
constant in time. We demonstrate the efficacy of our proposed method through
simulation studies and an application to a neuron spike train data set.
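The abstract's two key ingredients — a Hawkes intensity that admits both excitatory and inhibitory edges, and a sparse least squares estimator — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the exponential transfer kernel, the constant background rate, the rectifier used to keep the intensity nonnegative, and the ISTA solver are all illustrative assumptions.

```python
import numpy as np

def intensity(t, mu, omega, beta, spikes):
    """Conditional intensity of one node at time t.

    mu     : background rate (constant here; the paper allows it to vary in time)
    omega  : length-p array of edge weights; negative entries model inhibition
    beta   : decay rate of the (assumed) exponential transfer kernel
    spikes : list of p arrays of past spike times, one per node
    """
    lam = mu
    for j, times in enumerate(spikes):
        past = times[times < t]
        lam += omega[j] * np.sum(np.exp(-beta * (t - past)))
    return max(lam, 0.0)  # rectify: inhibitory input can push the sum below zero

def sparse_least_squares(X, y, lam_reg=0.1, n_iter=500):
    """Lasso-style sparse least squares via ISTA (an illustrative solver)."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        # soft-thresholding induces sparsity in the recovered network
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam_reg, 0.0)
    return w
```

The rectifier is one simple way to keep a nonstationary intensity valid when inhibitory weights are present; the zero pattern of the fitted `omega` across nodes is what would be read off as the latent network.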
Related papers
- Small Contributions, Small Networks: Efficient Neural Network Pruning Based on Relative Importance [25.579863542008646]
We introduce an intuitive and interpretable pruning method based on activation statistics.
We build a distribution of weight contributions across the dataset and utilize its parameters to guide the pruning process.
Our method consistently outperforms several baseline and state-of-the-art pruning techniques.
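The summary does not spell out the paper's exact statistic, so the sketch below uses one plausible reading: a weight's contribution is |w| times the mean absolute input activation over a batch, and the smallest contributions are pruned. Treat the statistic and the quantile threshold as illustrative assumptions.

```python
import numpy as np

def prune_by_contribution(W, acts, keep_ratio=0.5):
    """Zero out the weights of one layer with the smallest average contribution.

    W    : (out, in) weight matrix
    acts : (batch, in) input activations collected over a dataset
    The 'contribution' of w_ij is |w_ij| * mean_batch |a_j| -- an illustrative
    stand-in for the paper's activation-based importance statistic.
    """
    contrib = np.abs(W) * np.mean(np.abs(acts), axis=0)  # broadcasts over rows
    threshold = np.quantile(contrib, 1.0 - keep_ratio)
    mask = contrib >= threshold
    return W * mask, mask
```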
arXiv Detail & Related papers (2024-10-21T16:18:31Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Estimation of the Mean Function of Functional Data via Deep Neural Networks [6.230751621285321]
We propose a deep neural network method to perform nonparametric regression for functional data.
The proposed method is applied to analyze positron emission tomography images of patients with Alzheimer disease.
arXiv Detail & Related papers (2020-12-08T17:18:16Z)
- Online neural connectivity estimation with ensemble stimulation [5.156484100374058]
We propose a method based on noisy group testing that drastically increases the efficiency of this process in sparse networks.
We show that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size.
We also demonstrate the feasibility of inferring connectivity for networks of up to tens of thousands of neurons online.
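The summary does not name the decoder, so the sketch below uses the classical COMP rule for group testing under a noiseless assumption: a neuron is ruled out as a presynaptic partner whenever it appears in a stimulation pool that produces no response. The pool sizes and test counts are illustrative.

```python
import numpy as np

def comp_decode(pools, outcomes, n):
    """COMP decoding for group testing (illustrative, noiseless setting).

    pools    : list of index arrays, the neuron subsets stimulated per test
    outcomes : list of booleans, True if the recorded neuron responded
    n        : population size
    """
    connected = np.ones(n, dtype=bool)
    for pool, responded in zip(pools, outcomes):
        if not responded:
            # a negative test clears every neuron in the pool at once
            connected[pool] = False
    return connected

def run_tests(true_edges, n, n_tests, pool_size, rng):
    """Simulate ensemble stimulation: a test is positive iff the pool
    contains at least one truly connected neuron (no noise assumed)."""
    pools, outcomes = [], []
    for _ in range(n_tests):
        pool = rng.choice(n, size=pool_size, replace=False)
        pools.append(pool)
        outcomes.append(bool(np.any(true_edges[pool])))
    return pools, outcomes
```

Because each negative test eliminates a whole pool of candidates, the number of tests needed for a sparse network grows only logarithmically with the population size, which is the scaling the summary refers to.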
arXiv Detail & Related papers (2020-07-27T23:47:03Z)
- Statistical Inference for Networks of High-Dimensional Point Processes [19.38934705817528]
We develop a new statistical inference procedure for high-dimensional Hawkes processes.
The key ingredient for this inference procedure is a new concentration inequality on the first- and second-order statistics.
We demonstrate their utility by applying them to a neuron spike train data set.
arXiv Detail & Related papers (2020-07-15T02:46:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.