A neural network based model for multi-dimensional nonlinear Hawkes
processes
- URL: http://arxiv.org/abs/2303.03073v1
- Date: Mon, 6 Mar 2023 12:31:19 GMT
- Title: A neural network based model for multi-dimensional nonlinear Hawkes
processes
- Authors: Sobin Joseph and Shashi Jain
- Abstract summary: We introduce the Neural Network for Nonlinear Hawkes processes (NNNH), a non-parametric method based on neural networks to fit nonlinear Hawkes processes.
Our results highlight the effectiveness of the NNNH method in accurately capturing the complexities of nonlinear Hawkes processes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper introduces the Neural Network for Nonlinear Hawkes processes
(NNNH), a non-parametric method based on neural networks to fit nonlinear
Hawkes processes. Our method is suitable for analyzing large datasets in which
events exhibit both mutually-exciting and inhibitive patterns. The NNNH
approach models the individual kernels and the base intensity of the nonlinear
Hawkes process using feed forward neural networks and jointly calibrates the
parameters of the networks by maximizing the log-likelihood function. We
utilize Stochastic Gradient Descent to search for the optimal parameters and
propose an unbiased estimator for the gradient, as well as an efficient
computation method. We demonstrate the flexibility and accuracy of our method
through numerical experiments on both simulated and real-world data, and
compare it with state-of-the-art methods. Our results highlight the
effectiveness of the NNNH method in accurately capturing the complexities of
nonlinear Hawkes processes.
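The following is a minimal one-dimensional sketch of this setup in PyTorch, not the authors' implementation: the kernel architecture, the softplus link, the Monte Carlo estimate of the compensator integral, and all hyperparameters are illustrative choices. The uniform Monte Carlo estimate is one standard way to obtain unbiased stochastic gradients of the log-likelihood; the paper derives its own unbiased estimator and an efficient computation scheme.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class KernelNet(nn.Module):
    """Feed-forward network phi(t): unconstrained output, so the kernel
    can be negative (inhibition) as well as positive (excitation)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, dt):                   # dt: (n, 1) elapsed times
        return self.net(dt).squeeze(-1)

phi  = KernelNet()
mu   = torch.tensor(0.5, requires_grad=True)  # base intensity (pre-link)
link = nn.Softplus()                          # nonlinearity keeps lambda > 0

def lam(t, events):
    """lambda(t) = g(mu + sum_{t_k < t} phi(t - t_k))."""
    past = events[events < t]
    excite = phi((t - past).unsqueeze(-1)).sum() if len(past) > 0 else 0.0
    return link(mu + excite)

def neg_log_lik(events, T, n_mc=200):
    """-log L = int_0^T lambda(s) ds - sum_k log lambda(t_k); the integral
    is replaced by a uniform Monte Carlo estimate, which keeps the
    stochastic gradient unbiased."""
    log_term = torch.stack([torch.log(lam(t, events)) for t in events]).sum()
    u = torch.rand(n_mc) * T
    integral = T * torch.stack([lam(s, events) for s in u]).mean()
    return integral - log_term

events = torch.sort(torch.rand(40) * 10.0).values  # toy event times on [0, 10]
opt = torch.optim.SGD(list(phi.parameters()) + [mu], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    neg_log_lik(events, T=10.0).backward()
    opt.step()
```

Extending this to the multi-dimensional case amounts to one kernel network per ordered pair of dimensions and one base-intensity term per dimension, jointly calibrated as above.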
Related papers
- DiffHybrid-UQ: Uncertainty Quantification for Differentiable Hybrid
Neural Modeling [4.76185521514135]
We introduce a novel method, DiffHybrid-UQ, for effective and efficient uncertainty propagation and estimation in hybrid neural differentiable models.
Specifically, our approach effectively discerns and quantifies both aleatoric uncertainties, arising from data noise, and epistemic uncertainties, resulting from model-form discrepancies and data sparsity.
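As a generic illustration of that split (a deep-ensemble sketch, not the DiffHybrid-UQ algorithm): each ensemble member predicts a mean and a variance; the average predicted variance estimates aleatoric uncertainty, while the disagreement between members' means estimates epistemic uncertainty. All architectures and data below are toy assumptions.

```python
import torch
import torch.nn as nn

def make_member():
    # Each ensemble member outputs a mean and a log-variance.
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))

ensemble = [make_member() for _ in range(5)]

# Toy heteroscedastic data: noise level grows with |x|.
x = torch.linspace(-2, 2, 512).unsqueeze(-1)
y = torch.sin(3 * x) + 0.1 * torch.abs(x) * torch.randn_like(x)

for net in ensemble:                      # train each member independently
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(500):
        opt.zero_grad()
        mean, log_var = net(x).chunk(2, dim=-1)
        # Gaussian negative log-likelihood learns the noise level.
        nll = (0.5 * log_var + 0.5 * (y - mean) ** 2 / log_var.exp()).mean()
        nll.backward()
        opt.step()

with torch.no_grad():
    outs = torch.stack([net(x) for net in ensemble])   # (5, 512, 2)
    means, log_vars = outs[..., :1], outs[..., 1:]
    aleatoric = log_vars.exp().mean(dim=0)  # average predicted noise variance
    epistemic = means.var(dim=0)            # disagreement across members
```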
arXiv Detail & Related papers (2023-12-30T07:40:47Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training is a non-convex optimization problem.
In this paper we examine the use of convex neural recovery models.
We show that the stationary points of the non-convex objective can be characterized as global optima of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- SimPINNs: Simulation-Driven Physics-Informed Neural Networks for
Enhanced Performance in Nonlinear Inverse Problems [0.0]
This paper introduces a novel approach to solving inverse problems by leveraging deep learning techniques.
The objective is to infer unknown parameters that govern a physical system based on observed data.
arXiv Detail & Related papers (2023-09-27T06:34:55Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
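A schematic of that two-stage idea under toy assumptions (the simulator stand-in, network sizes, and parameter shapes are all illustrative, not the paper's code): a network is trained once to mimic the simulator, then automatic differentiation through the frozen surrogate recovers parameters from observed data.

```python
import torch
import torch.nn as nn

# Hypothetical surrogate: maps model parameters (e.g. Hamiltonian couplings)
# to a predicted observable (e.g. a spectrum on a fixed 100-point grid).
surrogate = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 100))

def simulator(theta):                     # toy stand-in for the true model
    grid = torch.linspace(0.0, 1.0, 100)
    return theta[:, :1] * torch.sin(10 * grid) + theta[:, 1:] * grid

# Stage 1 (offline, done once): fit the surrogate to simulated data.
theta_train = torch.rand(4096, 2)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(theta_train), simulator(theta_train))
    loss.backward()
    opt.step()

# Stage 2 (online): recover unknown parameters from "experimental" data by
# differentiating through the frozen surrogate.
observed = simulator(torch.tensor([[0.7, 0.3]]))   # pretend measurement
theta = torch.zeros(1, 2, requires_grad=True)
fit = torch.optim.Adam([theta], lr=5e-2)
for _ in range(300):
    fit.zero_grad()
    loss = nn.functional.mse_loss(surrogate(theta), observed)
    loss.backward()
    fit.step()
```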
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose the implicit stochastic gradient descent (ISGD) method for training PINNs, improving the stability of the training process.
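As a rough, generic illustration of the implicit-update idea (a proximal step solved approximately by inner iterations; this is not the paper's ISGD scheme for PINNs, and the step sizes are arbitrary):

```python
import torch

def implicit_sgd_step(theta, loss_fn, lr=0.1, inner_iters=20):
    """One implicit (proximal) SGD step:
        theta_new = argmin_w  loss(w) + ||w - theta||^2 / (2 * lr),
    solved approximately by a few inner gradient iterations."""
    w = theta.clone().detach().requires_grad_(True)
    inner = torch.optim.SGD([w], lr=lr / inner_iters)
    for _ in range(inner_iters):
        inner.zero_grad()
        prox = loss_fn(w) + ((w - theta.detach()) ** 2).sum() / (2 * lr)
        prox.backward()
        inner.step()
    return w.detach()

# Toy usage on an ill-conditioned quadratic, where explicit SGD with the
# same step size (0.1 against curvature 100) would diverge.
A = torch.diag(torch.tensor([1.0, 100.0]))
loss = lambda w: 0.5 * (w @ A @ w)
theta = torch.tensor([1.0, 1.0])
for _ in range(50):
    theta = implicit_sgd_step(theta, loss, lr=0.1)
```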
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Scalable and adaptive variational Bayes methods for Hawkes processes [4.580983642743026]
We propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes.
Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings.
arXiv Detail & Related papers (2022-12-01T05:35:32Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Inverse Problem of Nonlinear Schrödinger Equation as Learning of
Convolutional Neural Network [5.676923179244324]
It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
It provides a natural framework for solving inverse problems of partial differential equations with deep learning.
arXiv Detail & Related papers (2021-07-19T02:54:37Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
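A minimal sketch of such a min-max formulation for a conditional moment restriction E[Y - f(X) | Z] = 0, where the estimator f plays against a test function g (the quadratic stabilizer on g and the toy instrumental-variable data are illustrative assumptions, not the paper's exact objective):

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # estimator
g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # adversary
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

# Toy instrumental-variable data: Z instruments X, which drives Y.
n = 2048
Z = torch.randn(n, 1)
X = Z + 0.1 * torch.randn(n, 1)
Y = 2.0 * X + 0.1 * torch.randn(n, 1)

def game_value():
    residual = Y - f(X)
    # min_f max_g E[(Y - f(X)) g(Z)] - 0.5 E[g(Z)^2]
    return (residual * g(Z)).mean() - 0.5 * (g(Z) ** 2).mean()

for step in range(2000):
    # Ascent step for the adversary g ...
    opt_g.zero_grad()
    (-game_value()).backward()
    opt_g.step()
    # ... then a descent step for the estimator f.
    opt_f.zero_grad()
    game_value().backward()
    opt_f.step()
```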
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Shallow Neural Hawkes: Non-parametric kernel estimation for Hawkes
processes [0.0]
The multi-dimensional Hawkes process (MHP) is a class of self- and mutually-exciting point processes.
We first find an unbiased estimator for the log-likelihood of the Hawkes process.
We propose a specific single layered neural network for the non-parametric estimation of the underlying kernels.
arXiv Detail & Related papers (2020-06-03T18:15:38Z)
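For contrast with the nonlinear NNNH sketch above, here is a minimal sketch of a single-hidden-layer, non-negative kernel for the linear (purely exciting) multi-dimensional case; the architecture, activations, and toy data are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ShallowKernel(nn.Module):
    """Single hidden layer; the softplus output keeps phi(t) >= 0, as
    required for a linear (purely exciting) Hawkes intensity."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Sigmoid(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, dt):
        return self.net(dt).squeeze(-1)

# For a 2-dimensional MHP, one kernel per ordered pair of dimensions.
kernels = {(i, j): ShallowKernel() for i in range(2) for j in range(2)}
# Base rates, positive by initialization only; a softplus
# reparameterization would enforce positivity during training.
mu = torch.full((2,), 0.1, requires_grad=True)

def lam(i, t, times, dims):
    """Linear intensity: lambda_i(t) = mu_i + sum_{t_k < t} phi_{i,d_k}(t - t_k)."""
    total = mu[i]
    for j in range(2):
        past = times[(times < t) & (dims == j)]
        if len(past) > 0:
            total = total + kernels[(i, j)]((t - past).unsqueeze(-1)).sum()
    return total

# Toy usage: three events, alternating dimensions.
times = torch.tensor([0.3, 1.1, 2.0])
dims = torch.tensor([0, 1, 0])
print(lam(0, torch.tensor(2.5), times, dims))
```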
This list is automatically generated from the titles and abstracts of the papers on this site.