Sequential change-point detection for mutually exciting point processes
over networks
- URL: http://arxiv.org/abs/2102.05724v1
- Date: Wed, 10 Feb 2021 20:20:06 GMT
- Title: Sequential change-point detection for mutually exciting point processes
over networks
- Authors: Haoyun Wang, Liyan Xie, Yao Xie, Alex Cuozzo, Simon Mak
- Abstract summary: We present a new CUSUM procedure for sequentially detecting change-points in self- and mutually exciting processes, a.k.a. Hawkes networks.
We show that the proposed CUSUM method achieves better performance than existing methods.
- Score: 11.672651073865538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new CUSUM procedure for sequentially detecting
change-points in self- and mutually exciting processes, a.k.a. Hawkes networks,
using discrete event data. Hawkes networks have become a popular model in
statistics and machine learning due to their capability to model irregularly
observed data in which the timing between events carries much of the
information. The problem of detecting abrupt changes in Hawkes networks arises
in various applications, including neuronal imaging, sensor networks, and
social network monitoring.
Despite this, there has not been a computationally and memory-efficient online
algorithm for detecting such changes from sequential data. We present an
efficient online recursive implementation of the CUSUM statistic for Hawkes
processes, both decentralized and memory-efficient, and establish the
theoretical properties of this new CUSUM procedure. We then show that the
proposed CUSUM method achieves better performance than existing methods,
including the Shewhart procedure based on count data, the generalized
likelihood ratio (GLR) procedure from the existing literature, and the standard
score statistic. We demonstrate this via a simulated example and an application to
population code change-detection in neuronal networks.
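The abstract's recursive, memory-efficient CUSUM implementation can be sketched in a simplified setting: a single univariate Hawkes process with an exponential kernel and known pre-/post-change parameters. This is an illustrative approximation, not the paper's decentralized network procedure; the function and its parameter names (`mu0`, `alpha0`, `mu1`, `alpha1`, `beta`) are assumptions made for the sketch.

```python
import math

def hawkes_cusum(event_times, mu0, alpha0, mu1, alpha1, beta, threshold):
    """Event-by-event CUSUM for a univariate Hawkes process with an
    exponential kernel exp(-beta * t). Pre-change intensity
    lam0(t) = mu0 + alpha0 * sum_i exp(-beta * (t - t_i)); post-change
    intensity uses (mu1, alpha1). Alarms when the statistic crosses
    `threshold`."""
    S = 0.0       # CUSUM statistic
    A = 0.0       # excitation state: sum_{i<k} exp(-beta * (t_k - t_i))
    t_prev = 0.0
    for k, t in enumerate(event_times):
        dt = t - t_prev
        decay = math.exp(-beta * dt)
        carry = (A + 1.0) if k > 0 else 0.0  # excitation just after the last event
        A = carry * decay                    # decayed to the current event time
        lam0 = mu0 + alpha0 * A
        lam1 = mu1 + alpha1 * A
        # compensators (integrated intensities) over the inter-event interval
        Lam0 = mu0 * dt + (alpha0 / beta) * carry * (1.0 - decay)
        Lam1 = mu1 * dt + (alpha1 / beta) * carry * (1.0 - decay)
        # log-likelihood-ratio increment, clipped at zero (CUSUM recursion)
        S = max(0.0, S + math.log(lam1 / lam0) - (Lam1 - Lam0))
        if S > threshold:
            return k, S   # alarm at event index k
        t_prev = t
    return None, S        # no alarm within the observed events
```

The exponential kernel is what makes an O(1)-per-event update possible: the scalar excitation state `A` summarizes the entire event history, so neither the past events nor the full intensity path needs to be stored.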
Related papers
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Neural network-based CUSUM for online change-point detection [17.098858682219866]
We introduce a neural network CUSUM (NN-CUSUM) for online change-point detection.
We present a general theoretical condition when the trained neural networks can perform change-point detection.
The strong performance of NN-CUSUM is demonstrated in detecting change-points in high-dimensional data.
arXiv Detail & Related papers (2022-10-31T16:47:11Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
STING (Self-attention based Time-series Imputation Networks using GAN) is proposed.
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
arXiv Detail & Related papers (2022-09-22T06:06:56Z) - The impact of memory on learning sequence-to-sequence tasks [6.603326895384289]
Recent success of neural networks in natural language processing has drawn renewed attention to learning sequence-to-sequence (seq2seq) tasks.
We propose a model for a seq2seq task that has the advantage of providing explicit control over the degree of memory, or non-Markovianity, in the sequences.
arXiv Detail & Related papers (2022-05-29T14:57:33Z) - Universal Transformer Hawkes Process with Adaptive Recursive Iteration [4.624987488467739]
Asynchronous event sequences are widespread in the natural world and in human activities, such as earthquake records and user activity on social media.
How to distill information from these seemingly disorganized data is a persistent research topic.
One of the most useful models is the point process model, on whose basis researchers have obtained many notable results.
In recent years, point process models built on neural networks, especially recurrent neural networks (RNNs), have been proposed; compared with traditional models, their performance is greatly improved.
arXiv Detail & Related papers (2021-12-29T09:55:12Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time and the likelihood ratio loss with interarrival time probability assumptions can greatly improve the model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - CREPO: An Open Repository to Benchmark Credal Network Algorithms [78.79752265884109]
Credal networks are imprecise probabilistic graphical models based on so-called credal sets of probability mass functions.
A Java library called CREMA has been recently released to model, process and query credal networks.
We present CREPO, an open repository of synthetic credal networks, provided together with the exact results of inference tasks on these models.
arXiv Detail & Related papers (2021-05-10T07:31:59Z) - Mutually exciting point process graphs for modelling dynamic networks [0.0]
A new class of models for dynamic networks is proposed, called mutually exciting point process graphs (MEG).
MEG is a scalable network-wide statistical model for point processes with dyadic marks, which can be used for anomaly detection.
The model is tested on simulated graphs and real world computer network datasets, demonstrating excellent performance.
arXiv Detail & Related papers (2021-02-11T10:14:55Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z)
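The headline paper and the NN-CUSUM entry above share the same one-line online recursion, S_k = max(0, S_{k-1} + l_k), where l_k is a log-likelihood-ratio increment (computed exactly under a Hawkes model, approximated by a trained network in NN-CUSUM). A minimal generic sketch, with `score` as a hypothetical stand-in for that increment:

```python
def cusum_alarm(stream, score, threshold):
    """Generic online CUSUM: accumulate score increments, clip at zero,
    and alarm once the statistic exceeds `threshold`. `score` should
    approximate the log-likelihood ratio of post- vs. pre-change."""
    S = 0.0
    for k, x in enumerate(stream):
        S = max(0.0, S + score(x))
        if S > threshold:
            return k    # alarm at sample index k
    return None         # no change detected
```

The clipping at zero is what keeps the statistic from drifting negative under the pre-change regime, so detection delay after a change does not depend on how long the process ran before it.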
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.