Statistical Inference for Networks of High-Dimensional Point Processes
- URL: http://arxiv.org/abs/2007.07448v1
- Date: Wed, 15 Jul 2020 02:46:36 GMT
- Title: Statistical Inference for Networks of High-Dimensional Point Processes
- Authors: Xu Wang, Mladen Kolar and Ali Shojaie
- Abstract summary: We develop a new statistical inference procedure for high-dimensional Hawkes processes.
The key ingredient for this inference procedure is a new concentration inequality on the first- and second-order statistics.
We demonstrate their utility by applying them to a neuron spike train data set.
- Score: 19.38934705817528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fueled in part by recent applications in neuroscience, the multivariate
Hawkes process has become a popular tool for modeling the network of
interactions among high-dimensional point process data. While evaluating the
uncertainty of the network estimates is critical in scientific applications,
existing methodological and theoretical work has primarily addressed
estimation. To bridge this gap, this paper develops a new statistical inference
procedure for high-dimensional Hawkes processes. The key ingredient for this
inference procedure is a new concentration inequality on the first- and
second-order statistics for integrated stochastic processes, which summarize
the entire history of the process. Combining recent results on martingale
central limit theory with the new concentration inequality, we then
characterize the convergence rate of the test statistics. We illustrate finite
sample validity of our inferential tools via extensive simulations and
demonstrate their utility by applying them to a neuron spike train data set.
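As background for the model class the paper studies, the sketch below simulates a multivariate Hawkes process with exponential kernels via Ogata's thinning algorithm. This is generic illustrative code, not the authors' inference procedure; all function and parameter names are our own.

```python
import numpy as np

def simulate_hawkes_exp(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a multivariate Hawkes process with exponential kernels.

    Intensity of node i at time t:
        lambda_i(t) = mu[i] + sum_j alpha[i, j] * sum_{t_k^j < t} beta * exp(-beta * (t - t_k^j))
    """
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    d = mu.size
    excite = np.zeros((d, d))  # current excitation of target i from source j
    events = [[] for _ in range(d)]
    t = 0.0
    while True:
        # Exponential kernels only decay between events, so the current total
        # intensity is a valid upper bound until the next candidate point.
        lam_bar = mu.sum() + excite.sum()
        w = rng.exponential(1.0 / lam_bar)
        t += w
        if t > T:
            break
        excite *= np.exp(-beta * w)        # decay excitation to the candidate time
        lam = mu + excite.sum(axis=1)      # per-node intensity at the candidate
        u = rng.uniform(0.0, lam_bar)
        if u <= lam.sum():                 # accept the candidate point
            i = rng.choice(d, p=lam / lam.sum())
            events[i].append(t)
            excite[:, i] += alpha[:, i] * beta  # node i's event excites all targets
    return events
```

Stationarity requires the spectral radius of `alpha` to be below one; otherwise the simulated process can explode.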
Related papers
- Amortised Inference in Bayesian Neural Networks [0.0]
We introduce the Amortised Pseudo-Observation Variational Inference Bayesian Neural Network (APOVI-BNN)
We show that the amortised inference is of similar or better quality than that obtained through traditional variational inference.
We then discuss how the APOVI-BNN may be viewed as a new member of the neural process family.
arXiv Detail & Related papers (2023-09-06T14:02:33Z) - MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z) - FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared with the state of the art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z) - ARISE: ApeRIodic SEmi-parametric Process for Efficient Markets without
Periodogram and Gaussianity Assumptions [91.3755431537592]
We present the ApeRIodic SEmi-parametric (ARISE) process for investigating efficient markets.
The ARISE process is formulated as an infinite sum of known processes and employs aperiodic spectrum estimation.
In practice, we apply the ARISE process to identify the efficiency of real-world markets.
arXiv Detail & Related papers (2021-11-08T03:36:06Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Statistical learning and cross-validation for point processes [0.9281671380673306]
This paper presents the first general (parametric) statistical learning framework for point processes in general spaces.
The general idea is to carry out the fitting by predicting CV-generated validation sets using the corresponding training sets.
We numerically show that our statistical learning approach outperforms the state of the art in terms of mean (integrated) squared error.
arXiv Detail & Related papers (2021-03-01T23:47:48Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
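A minimal reading of that idea is to blend the model's predictions in the flagged regions toward the label prior, which raises their entropy. The sketch below is our own simplified illustration, not the paper's method; all names are hypothetical.

```python
import numpy as np

def calibrate_toward_prior(probs, overconfident_mask, prior, gamma=0.5):
    """Blend predictions in flagged regions toward the label prior.

    gamma controls how far each flagged prediction moves toward the prior;
    the result remains a valid probability vector (a convex combination).
    """
    probs = np.asarray(probs, dtype=float).copy()
    prior = np.asarray(prior, dtype=float)
    mask = np.asarray(overconfident_mask, dtype=bool)
    probs[mask] = (1.0 - gamma) * probs[mask] + gamma * prior
    return probs
```

For example, with `gamma=0.5` and a uniform prior, a flagged prediction of `[0.9, 0.1]` moves to `[0.7, 0.3]`, while unflagged predictions are left untouched.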
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Uncertainty Quantification for Inferring Hawkes Networks [13.283258096829146]
We develop a statistical inference framework to learn causal relationships between nodes from networked data.
We provide uncertainty quantification for the maximum likelihood estimate of the network Hawkes process.
arXiv Detail & Related papers (2020-06-12T23:08:36Z) - rTop-k: A Statistical Estimation Approach to Distributed SGD [5.197307534263253]
We propose a simple statistical estimation model for gradients that captures their sparsity and yields a statistically optimal communication scheme.
Through extensive experiments on both image and language domains with the CIFAR-10, ImageNet, and Penn Treebank datasets, we show that a skewed application of the top-k and random-k sparsification methods consistently and significantly outperforms either method applied alone.
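One plausible reading of combining the two sparsifiers is sketched below: keep the r largest-magnitude gradient coordinates, then transmit a random k of them. This is our own minimal illustration with hypothetical names, not the paper's reference implementation.

```python
import numpy as np

def rtopk_sparsify(grad, r, k, seed=0):
    """Sketch of rTop-k-style sparsification: top-r selection followed by
    a random-k subsample of the selected coordinates."""
    rng = np.random.default_rng(seed)
    grad = np.asarray(grad, dtype=float)
    top_r = np.argpartition(np.abs(grad), -r)[-r:]      # indices of the r largest magnitudes
    chosen = rng.choice(top_r, size=k, replace=False)   # random k among those r
    sparse = np.zeros_like(grad)
    sparse[chosen] = grad[chosen]                       # everything else is zeroed out
    return sparse
```

Only k coordinate-value pairs per worker then need to be communicated, and every transmitted value is guaranteed to come from the top-r set.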
arXiv Detail & Related papers (2020-05-21T16:27:46Z) - Latent Network Structure Learning from High Dimensional Multivariate
Point Processes [5.079425170410857]
We propose a new class of nonstationary Hawkes processes to characterize the complex processes underlying the observed data.
We estimate the latent network structure using an efficient sparse least squares estimation approach.
We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train data set.
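As a rough illustration of the sparse least squares idea (not the authors' nonstationary estimator), one can bin spikes into per-interval counts and lasso-regress each node's counts on the previous bin's counts across all nodes. Everything below, including the plain ISTA solver, is our own simplified sketch.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Plain ISTA for the lasso: min_b 0.5*||y - X b||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)              # gradient of the least squares term
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

def estimate_network(counts, lam=0.1):
    """Regress each node's binned counts on the previous bin's counts;
    entry (i, j) of the result suggests an edge from node j to node i."""
    X, Y = counts[:-1], counts[1:]         # lagged design and responses
    return np.column_stack(
        [lasso_ista(X, Y[:, i], lam) for i in range(counts.shape[1])]
    ).T
```

On synthetic counts where node 1's rate is driven by node 0's previous count, the (1, 0) entry of the estimate comes out large while spurious entries are shrunk toward zero by the l1 penalty.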
arXiv Detail & Related papers (2020-04-07T17:48:01Z) - A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the timeline.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.