Spatio-temporal point processes with deep non-stationary kernels
- URL: http://arxiv.org/abs/2211.11179v1
- Date: Mon, 21 Nov 2022 04:49:39 GMT
- Title: Spatio-temporal point processes with deep non-stationary kernels
- Authors: Zheng Dong, Xiuyuan Cheng, Yao Xie
- Abstract summary: We develop a new deep non-stationary influence kernel that can model non-stationary spatio-temporal point processes.
The main idea is to approximate the influence kernel with a novel and general low-rank decomposition.
We also take a new approach to maintain the non-negativity constraint of the conditional intensity by introducing a log-barrier penalty.
- Score: 18.10670233156497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point process data are becoming ubiquitous in modern applications, such as
social networks, health care, and finance. Despite the powerful expressiveness
of the popular recurrent neural network (RNN) models for point process data,
they may not successfully capture sophisticated non-stationary dependencies in
the data due to their recurrent structures. Another popular type of deep model
for point process data is based on representing the influence kernel (rather
than the intensity function) by neural networks. We take the latter approach
and develop a new deep non-stationary influence kernel that can model
non-stationary spatio-temporal point processes. The main idea is to approximate
the influence kernel with a novel and general low-rank decomposition, enabling
efficient representation through deep neural networks, better computational
efficiency, and improved performance. We also take a new approach to maintain the
non-negativity constraint of the conditional intensity by introducing a
log-barrier penalty. We demonstrate our proposed method's good performance and
computational efficiency compared with the state-of-the-art on simulated and
real data.
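The two ingredients above can be sketched in a minimal, purely temporal form. The rank, basis functions, and all constants below are illustrative assumptions standing in for the paper's deep networks, not its actual architecture:

```python
import numpy as np

# Sketch of the two ideas in the abstract:
# (1) a low-rank influence kernel  k(t', t) ~ sum_r psi_r(t') * phi_r(t - t'),
#     where the dependence on the absolute event time t' (not just the lag
#     t - t') is what makes the kernel non-stationary;
# (2) a log-barrier penalty that keeps the conditional intensity positive.

alpha = np.array([0.3, 0.2, 0.1])   # assumed weights for the psi_r factors
omega = np.array([0.5, 1.0, 2.0])   # assumed decay rates for the phi_r factors

def psi(t_hist):
    # non-stationary factor: depends on the absolute event time t'
    return alpha[:, None] * np.exp(-0.05 * t_hist)[None, :]   # (R, n)

def phi(dt):
    # stationary factor: depends only on the lag t - t'
    return np.exp(-np.outer(omega, dt))                       # (R, n)

def intensity(mu, events, t):
    # conditional intensity: baseline plus summed kernel influence of past events
    past = events[events < t]
    k = np.sum(psi(past) * phi(t - past), axis=0)             # (n,)
    return mu + k.sum()

def barrier_nll(mu, events, T, grid_n=200, b=100.0):
    # negative log-likelihood plus a log-barrier evaluated on a grid,
    # penalizing the intensity as it approaches zero
    lam_ev = np.array([intensity(mu, events, t) for t in events])
    grid = np.linspace(T / grid_n, T, grid_n)
    lam_grid = np.array([intensity(mu, events, t) for t in grid])
    integral = lam_grid.sum() * (T / grid_n)    # Riemann-sum compensator
    barrier = -np.log(lam_grid).sum() / b       # log-barrier penalty
    return -(np.log(lam_ev).sum() - integral) + barrier

events = np.array([0.5, 1.2, 2.0, 3.1])
loss = barrier_nll(mu=0.2, events=events, T=4.0)
```

In the full model the spatial dimension adds analogous low-rank factors in the event location and displacement, and the barrier is evaluated on a spatio-temporal grid alongside the maximum-likelihood objective.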
Related papers
- Efficient and Effective Implicit Dynamic Graph Neural Network [42.49148111696576]
We present Implicit Dynamic Graph Neural Network (IDGNN), a novel implicit neural network for dynamic graphs.
A key characteristic of IDGNN is that it demonstrably is well-posed, i.e., it is theoretically guaranteed to have a fixed-point representation.
arXiv Detail & Related papers (2024-06-25T19:07:21Z) - Accelerating Convolutional Neural Network Pruning via Spatial Aura
Entropy [0.0]
Pruning is a popular technique to reduce the computational complexity and memory footprint of Convolutional Neural Network (CNN) models.
Existing methods for mutual information (MI) computation suffer from high computational cost and sensitivity to noise, leading to suboptimal pruning performance.
We propose a novel method to improve MI computation for CNN pruning, using the spatial aura entropy.
arXiv Detail & Related papers (2023-12-08T09:43:49Z) - Deep graph kernel point processes [19.382241594513374]
This paper presents a novel point process model for discrete event data over graphs, where the event interaction occurs within a latent graph structure.
The key idea is to represent the influence kernel by Graph Neural Networks (GNN) to capture the underlying graph structure.
Compared with prior works that directly model the conditional intensity function with neural networks, our kernel representation captures repeated event-influence patterns more effectively.
arXiv Detail & Related papers (2023-06-20T06:15:19Z) - FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields better estimates of pattern latency than the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z) - DeepBayes -- an estimator for parameter estimation in stochastic
nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators, which leverage the power of deep recurrent neural networks to learn the estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Neural Point Process for Learning Spatiotemporal Event Dynamics [21.43984242938217]
We propose a deep dynamics model that integrates spatiotemporal point processes.
Our method is flexible, efficient and can accurately forecast irregularly sampled events over space and time.
On real-world benchmarks, our model demonstrates superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-12T23:17:33Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike-timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.