Optimal Estimation of Generic Dynamics by Path-Dependent Neural Jump ODEs
- URL: http://arxiv.org/abs/2206.14284v6
- Date: Thu, 4 Jul 2024 16:02:36 GMT
- Title: Optimal Estimation of Generic Dynamics by Path-Dependent Neural Jump ODEs
- Authors: Florian Krach, Marc Nübel, Josef Teichmann
- Abstract summary: This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework \citep{herrera2021neural}.
We show that PD-NJ-ODE can be applied successfully to classical filtering problems and to limit order book (LOB) data.
- Score: 3.74142789780782
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework \citep{herrera2021neural}. While NJ-ODE was the first framework to establish convergence guarantees for the prediction of irregularly observed time series, these results were limited to data stemming from It\^o-diffusions with complete observations, in particular Markov processes, where all coordinates are observed simultaneously. In this work, we generalise these results to generic, possibly non-Markovian or discontinuous, stochastic processes with incomplete observations, by utilising the reconstruction properties of the signature transform. These theoretical results are supported by empirical studies, where it is shown that the path-dependent NJ-ODE outperforms the original NJ-ODE framework in the case of non-Markovian data. Moreover, we show that PD-NJ-ODE can be applied successfully to classical stochastic filtering problems and to limit order book (LOB) data.
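To make the abstract's mechanism concrete, the following is a minimal sketch of the NJ-ODE-style prediction loop in pure Python: the latent state evolves continuously between observations via an ODE vector field and is updated discontinuously (a "jump") whenever an observation arrives. The functions `f_theta`, `jump`, and `readout` are hypothetical stand-ins for the trained networks of the actual framework; this illustrates only the evolve/jump mechanics, not the paper's model or its convergence guarantees.

```python
def f_theta(h):
    # stand-in for the learned ODE vector field: mild mean reversion
    return [-0.5 * hi for hi in h]

def jump(h, x_obs):
    # stand-in for the learned jump network: pull the state toward the observation
    return [0.5 * hi + 0.5 * x_obs for hi in h]

def readout(h):
    # stand-in for the learned output network: average the hidden coordinates
    return sum(h) / len(h)

def predict(obs_times, obs_values, t_grid, dt=0.01):
    """Predict the process on t_grid from irregularly timed observations."""
    h = [0.0, 0.0]                      # hidden state
    t = 0.0
    obs = list(zip(obs_times, obs_values))
    preds = []
    for t_target in sorted(t_grid):
        while t < t_target:
            # apply any observation that has occurred by the current time
            while obs and obs[0][0] <= t:
                _, x = obs.pop(0)
                h = jump(h, x)          # discontinuous update at observation
            dh = f_theta(h)
            h = [hi + dt * dhi for hi, dhi in zip(h, dh)]  # Euler step
            t += dt
        preds.append(readout(h))
    return preds

# two observations at irregular times, predictions on a coarse grid
preds = predict([0.1, 0.4], [1.0, 2.0], [0.2, 0.5, 1.0])
```

The jump at each observation is what distinguishes this from a plain neural ODE: the state is allowed to be discontinuous exactly where new information arrives, which matches the irregular-observation setting the abstract describes.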
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z) - Learning Chaotic Systems and Long-Term Predictions with Neural Jump ODEs [4.204990010424083]
Path-Dependent Neural Jump ODE (PD-NJ-ODE) is a model for online prediction of generic stochastic processes with irregular (in time) and potentially incomplete (with respect to coordinates) observations.
In this work we enhance the model with two novel ideas, each of which independently improves the performance of the modelling setup.
The same enhancements can be used to provably enable the PD-NJ-ODE to learn long-term predictions for general datasets, where the standard model fails.
arXiv Detail & Related papers (2024-07-26T15:18:29Z) - Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation : A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z) - A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models that achieve effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z) - Data-driven Modeling and Inference for Bayesian Gaussian Process ODEs via Double Normalizing Flows [28.62579476863723]
We introduce normalizing flows to reparameterize the ODE vector field, resulting in a data-driven prior distribution.
We also apply normalizing flows to the posterior inference of GP ODEs to resolve the issue of strong mean-field assumptions.
We validate the effectiveness of our approach on simulated dynamical systems and real-world human motion data.
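The normalizing-flow idea mentioned in this blurb can be sketched in a few lines: an invertible map transforms samples from a simple base density, and the change-of-variables formula corrects the log-density by the log-determinant of the map's Jacobian. The single-parameter affine flow below is an illustrative toy, not the paper's double-flow construction.

```python
import math

def forward(z, a=2.0, b=1.0):
    # invertible affine transform z -> x = a*z + b
    return a * z + b

def inverse(x, a=2.0, b=1.0):
    return (x - b) / a

def log_prob(x, a=2.0, b=1.0):
    # change of variables: log p_X(x) = log p_Z(f^{-1}(x)) - log|det J| = ... - log|a|
    z = inverse(x, a, b)
    log_base = -0.5 * (z * z + math.log(2 * math.pi))  # standard-normal base density
    return log_base - math.log(abs(a))

x = forward(0.0)        # the base mode maps to x = 1.0
lp = log_prob(x)        # exact log-density of the transformed distribution at x
```

Because the map is invertible with a tractable Jacobian, the transformed density stays exact, which is what makes flows usable both as data-driven priors and in posterior inference, as the blurb describes.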
arXiv Detail & Related papers (2023-09-17T09:28:47Z) - Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation [21.27702285561771]
We present a first convergence analysis of the Schrödinger bridge algorithm based on approximated projections.
As for its practical applications, we apply SBP to probabilistic time series imputation by generating missing values conditioned on observed data.
arXiv Detail & Related papers (2023-05-12T04:39:01Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - State and parameter learning with PaRIS particle Gibbs [11.290331898505594]
Non-linear state-space models are ubiquitous in statistical machine learning.
PaRIS is a sequential Monte Carlo technique allowing for efficient online approximation of expectations of additive functionals.
We design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves.
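The sequential Monte Carlo machinery underlying PaRIS can be illustrated with a plain bootstrap particle filter. The toy linear-Gaussian state-space model below (x_t = 0.9 x_{t-1} + noise, observed with unit-variance noise) and all parameter values are illustrative assumptions, not the paper's model; the sketch shows only the propagate/weight/resample cycle that PPG's conditional SMC moves build on.

```python
import math, random

random.seed(0)

def particle_filter(ys, n=500):
    """Bootstrap particle filter for a toy linear-Gaussian model."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]       # initial particles
    means = []
    for y in ys:
        # propagate each particle through the transition model
        xs = [0.9 * x + random.gauss(0.0, 0.5) for x in xs]
        # weight by the observation likelihood N(y; x, 1)
        ws = [math.exp(-0.5 * (y - x) ** 2) for x in xs]
        total = sum(ws)
        ws = [w / total for w in ws]
        # filtered posterior mean at this step
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling to avoid weight degeneracy
        xs = random.choices(xs, weights=ws, k=n)
    return means

# filter a short synthetic observation sequence
means = particle_filter([0.2, 0.5, 1.0, 1.2])
```

The resampling step is the part that conditional SMC modifies: in a particle Gibbs sweep, one reference trajectory is held fixed through the resampling, which is what makes the sampler a valid MCMC kernel.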
arXiv Detail & Related papers (2023-01-02T23:27:33Z) - Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632]
The complex nature of graphs thwarts the adoption of the invariance principle for OOD generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
arXiv Detail & Related papers (2022-02-11T04:38:39Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.