UNIPoint: Universally Approximating Point Processes Intensities
- URL: http://arxiv.org/abs/2007.14082v4
- Date: Wed, 3 Mar 2021 01:07:53 GMT
- Title: UNIPoint: Universally Approximating Point Processes Intensities
- Authors: Alexander Soen, Alexander Mathews, Daniel Grixti-Cheng, Lexing Xie
- Abstract summary: We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
- Score: 125.08205865536577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point processes are a useful mathematical tool for describing events over
time, and so there are many recent approaches for representing and learning
them. One notable open question is how to precisely describe the flexibility of
point process models and whether there exists a general model that can
represent all point processes. Our work bridges this gap. Focusing on the
widely used event intensity function representation of point processes, we
provide a proof that a class of learnable functions can universally approximate
any valid intensity function. The proof connects the well-known
Stone-Weierstrass Theorem for function approximation, the uniform density of
non-negative continuous functions using a transfer function, the formulation
of the parameters of a piecewise-continuous function as a dynamical system,
and a recurrent neural network implementation for capturing the dynamics.
Using these insights, we design and implement UNIPoint, a novel neural point
process model, using recurrent neural networks to parameterise sums of basis
functions upon each event. Evaluations on synthetic and real-world datasets
show that
this simpler representation performs better than Hawkes process variants and
more complex neural network-based approaches. We expect this result will
provide a practical basis for selecting and tuning models, as well as
furthering theoretical work on representational complexity and learnability.
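
A minimal sketch of the kind of model the abstract describes: a recurrent network that, after each event, emits parameters for a sum of basis functions, with a softplus transfer keeping the intensity non-negative. The class name, the GRU backbone, the exponential basis, and the softplus transfer are illustrative assumptions, not necessarily the exact choices in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UniPointSketch(nn.Module):
    """Sketch of an RNN-parameterised intensity model (hypothetical names).

    After each event, the hidden state h_i produces parameters
    (alpha_k, beta_k) for K basis functions; the conditional intensity at
    elapsed time tau since the last event is approximated as
        lambda(tau) = softplus( sum_k alpha_k * exp(-beta_k * tau) ).
    """

    def __init__(self, hidden_size: int = 32, num_basis: int = 8):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.to_alpha = nn.Linear(hidden_size, num_basis)
        self.to_beta = nn.Linear(hidden_size, num_basis)

    def forward(self, inter_event_times: torch.Tensor, query_tau: torch.Tensor):
        # inter_event_times: (batch, seq_len, 1) gaps between consecutive events
        # query_tau:         (batch, seq_len, 1) elapsed time since each event
        h, _ = self.rnn(inter_event_times)        # (batch, seq_len, hidden)
        alpha = self.to_alpha(h)                  # (batch, seq_len, K) mixture weights
        beta = F.softplus(self.to_beta(h))        # decay rates kept positive
        basis = torch.exp(-beta * query_tau)      # exponential basis, broadcast over K
        # Softplus transfer keeps the intensity non-negative.
        return F.softplus((alpha * basis).sum(dim=-1))


if __name__ == "__main__":
    model = UniPointSketch()
    gaps = torch.rand(4, 10, 1)   # toy inter-event gaps
    tau = torch.rand(4, 10, 1)    # query offsets after each event
    intensity = model(gaps, tau)
    print(intensity.shape)        # torch.Size([4, 10])
```

In practice such a model would be trained by maximising the point process log-likelihood, which requires integrating the intensity between events (e.g. by Monte Carlo sampling of tau); that step is omitted here for brevity.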
Related papers
- Approximation of RKHS Functionals by Neural Networks [30.42446856477086]
We study the approximation of functionals on reproducing kernel Hilbert spaces (RKHSs) using neural networks.
We derive explicit error bounds for those induced by inverse multiquadric, Gaussian, and Sobolev kernels.
We apply our findings to functional regression, proving that neural networks can accurately approximate the regression maps.
arXiv Detail & Related papers (2024-03-18T18:58:23Z) - Going Beyond Neural Network Feature Similarity: The Network Feature
Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging.
arXiv Detail & Related papers (2023-10-10T16:27:12Z) - Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which greatly increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer, more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z) - NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalizes to the space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Shallow Representation is Deep: Learning Uncertainty-aware and
Worst-case Random Feature Dynamics [1.1470070927586016]
This paper views uncertain system models as unknown or uncertain smooth functions in universal kernel Hilbert spaces.
By directly approximating the one-step dynamics function using random features with uncertain parameters, we then view the whole dynamical system as a multi-layer neural network.
arXiv Detail & Related papers (2021-06-24T14:48:12Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Estimating Multiplicative Relations in Neural Networks [0.0]
We will use properties of logarithmic functions to propose a pair of activation functions which can translate products into linear expressions and learn using backpropagation (a minimal sketch of this idea appears after this list).
We will try to generalize this approach to some complex arithmetic functions and test the accuracy on a distribution disjoint from the training set.
arXiv Detail & Related papers (2020-10-28T14:28:24Z) - Deep Learning with Functional Inputs [0.0]
We present a methodology for integrating functional data into feed-forward neural networks.
A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization process.
The model is shown to perform well in a number of contexts including prediction of new data and recovery of the true underlying functional weights.
arXiv Detail & Related papers (2020-06-17T01:23:00Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
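
As a companion to the "Estimating Multiplicative Relations in Neural Networks" entry above, here is a minimal sketch of the log/exp activation idea: taking logarithms turns a product into a linear expression that an ordinary linear layer can learn. The layer name, the clamping constant, and the exact layout are assumptions for illustration, not the paper's actual design, and the trick as written only applies to strictly positive inputs.

```python
import torch
import torch.nn as nn


class LogExpProductLayer(nn.Module):
    """Sketch of a log/exp activation pair that turns products into sums.

    For strictly positive inputs x, log maps a product x1^w1 * x2^w2 to the
    linear expression w1*log(x1) + w2*log(x2), which a standard linear layer
    can fit by backpropagation; exp maps the result back.
    """

    def __init__(self, in_features: int, out_features: int, eps: float = 1e-6):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.eps = eps  # guards against log(0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.exp(self.linear(torch.log(x.clamp_min(self.eps))))


if __name__ == "__main__":
    layer = LogExpProductLayer(2, 1)
    x = torch.tensor([[2.0, 3.0], [4.0, 5.0]])
    # With weights of 1 and zero bias, the output would equal x1 * x2.
    print(layer(x).shape)  # torch.Size([2, 1])
```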
This list is automatically generated from the titles and abstracts of the papers in this site.