Optimal Estimation of Generic Dynamics by Path-Dependent Neural Jump ODEs
- URL: http://arxiv.org/abs/2206.14284v6
- Date: Thu, 4 Jul 2024 16:02:36 GMT
- Title: Optimal Estimation of Generic Dynamics by Path-Dependent Neural Jump ODEs
- Authors: Florian Krach, Marc Nübel, Josef Teichmann
- Abstract summary: This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework \citep{herrera2021neural}.
We show that PD-NJ-ODE can be applied successfully to classical filtering problems and to limit order book (LOB) data.
- Score: 3.74142789780782
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework \citep{herrera2021neural}. While NJ-ODE was the first framework to establish convergence guarantees for the prediction of irregularly observed time series, these results were limited to data stemming from It\^o-diffusions with complete observations, in particular Markov processes, where all coordinates are observed simultaneously. In this work, we generalise these results to generic, possibly non-Markovian or discontinuous, stochastic processes with incomplete observations, by utilising the reconstruction properties of the signature transform. These theoretical results are supported by empirical studies, where it is shown that the path-dependent NJ-ODE outperforms the original NJ-ODE framework in the case of non-Markovian data. Moreover, we show that PD-NJ-ODE can be applied successfully to classical stochastic filtering problems and to limit order book (LOB) data.
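For intuition, here is a minimal, hedged sketch of the NJ-ODE mechanism the abstract builds on (our own PyTorch illustration, not the authors' implementation; all names are ours): a latent state follows a neural ODE between observation times, jumps via a learned update when a new observation arrives, and a readout estimates the conditional expectation of the process. The path-dependent extension would additionally feed (truncated) signature features of the observed path into both networks; that part is omitted here.
```python
import torch
import torch.nn as nn

class NeuralJumpODE(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.f_ode = nn.Sequential(          # latent drift between observations
            nn.Linear(hidden_dim + 1, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim))
        self.f_jump = nn.Sequential(         # jump update at a new observation
            nn.Linear(hidden_dim + obs_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim))
        self.readout = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs_times, obs_values, t_grid, dt: float = 0.01):
        """obs_times: sorted 1-D tensor of observation times;
        obs_values: (num_obs, obs_dim); t_grid: 1-D tensor of query times.
        Returns forecasts of E[X_t | past observations] on t_grid (Euler)."""
        h = torch.zeros(self.hidden_dim)
        t, next_obs, preds = 0.0, 0, []
        for t_eval in t_grid.tolist():
            while t < t_eval:
                # jump as soon as an observation time has been reached
                if next_obs < len(obs_times) and float(obs_times[next_obs]) <= t:
                    h = self.f_jump(torch.cat([h, obs_values[next_obs]]))
                    next_obs += 1
                # Euler step of the latent ODE between observations
                h = h + dt * self.f_ode(torch.cat([h, torch.tensor([t])]))
                t += dt
            preds.append(self.readout(h))
        return torch.stack(preds)
```
Training would then minimise a squared error between these forecasts and the values actually observed at the observation times, which is what yields the conditional-expectation interpretation.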
Related papers
- Nonparametric Filtering, Estimation and Classification using Neural Jump ODEs [3.437372707846067]
Neural Jump ODEs model the conditional expectation between observations with neural ODEs and jump at the arrival of new observations.
They have demonstrated effectiveness for fully data-driven online forecasting in settings with irregular and partial observations.
This work extends the framework to input-output systems, enabling direct applications in online filtering and classification.
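For reference, the target such models estimate between observations is the conditional expectation given the discretely observed past; schematically (our notation, not taken from the paper),
```latex
\hat{X}_t = \mathbb{E}\bigl[ X_t \,\big|\, (t_i, X_{t_i})_{t_i \le t} \bigr], \qquad t \in [t_i, t_{i+1}),
```
with the neural ODE transporting the latent state on $[t_i, t_{i+1})$ and the jump update applied when $X_{t_{i+1}}$ arrives.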
arXiv Detail & Related papers (2024-12-04T12:31:15Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
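As background, the ODE meant here is typically the probability-flow ODE associated with a forward diffusion SDE; in the standard notation (not specific to this paper),
```latex
\frac{\mathrm{d}x_t}{\mathrm{d}t} = f(x_t, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_{x} \log p_t(x_t),
```
where $f$ and $g$ are the drift and diffusion coefficients of the forward SDE and $\nabla_x \log p_t$ is the score; sampling integrates this ODE backwards from the prior.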
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Convergence Analysis for General Probability Flow ODEs of Diffusion Models in Wasserstein Distances [9.47767039367222]
We provide the first non-asymptotic convergence analysis for a general class of probability flow ODE samplers in 2-Wasserstein distance.
Our proof technique relies on spelling out explicitly the contraction rate for the continuous-time ODE and analyzing the discretization and score-matching errors using synchronous coupling.
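For readers unfamiliar with the metric, the 2-Wasserstein distance between distributions $\mu$ and $\nu$ is
```latex
W_2(\mu, \nu) = \Bigl( \inf_{\gamma \in \Gamma(\mu, \nu)} \mathbb{E}_{(x, y) \sim \gamma} \lVert x - y \rVert^2 \Bigr)^{1/2},
```
where $\Gamma(\mu, \nu)$ is the set of couplings of $\mu$ and $\nu$; a synchronous coupling upper-bounds the infimum by evolving both processes from a shared source of randomness.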
arXiv Detail & Related papers (2024-01-31T16:07:44Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the one-class classification (OCC) setting.
Our method performs on par with other existing state-of-the-art PA-generation and reconstruction-based methods under the OCC setting.
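A hedged sketch of the generation idea as described (our illustration, not the paper's pipeline; `inpaint` stands in for an arbitrary frame-inpainting model and is hypothetical):
```python
import torch

def make_pseudo_anomaly(clip: torch.Tensor, inpaint) -> torch.Tensor:
    """clip: (T, C, H, W) video tensor; inpaint: fn(clip, mask) -> (T, C, H, W).
    Masks a random spatio-temporal cube and fills it with inpainted content,
    so the filled region serves as a pseudo-anomaly (PA) for OCC training."""
    T, _, H, W = clip.shape
    t0 = int(torch.randint(0, max(T // 2, 1), (1,)))   # random temporal window
    y0 = int(torch.randint(0, max(H // 2, 1), (1,)))   # random spatial corner
    x0 = int(torch.randint(0, max(W // 2, 1), (1,)))
    mask = torch.zeros(T, 1, H, W)
    mask[t0:t0 + T // 2, :, y0:y0 + H // 2, x0:x0 + W // 2] = 1.0
    # keep the clip outside the cube, substitute inpainted content inside it
    return torch.where(mask.bool(), inpaint(clip, mask), clip)
```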
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model architecture and parameter-initialization scheme.
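A generic PAC-Bayes bound of the kind such analyses build on (the standard McAllester/Maurer form for losses in $[0,1]$, not the paper's specific result): with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$,
```latex
L(Q) \le \hat{L}(Q) + \sqrt{\frac{\mathrm{KL}(Q \,\Vert\, P) + \ln\!\bigl(2\sqrt{n}/\delta\bigr)}{2n}},
```
where $L$ and $\hat{L}$ denote population and empirical risk and $P$ is a data-independent prior.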
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Data-driven Modeling and Inference for Bayesian Gaussian Process ODEs via Double Normalizing Flows [28.62579476863723]
We introduce normalizing flows to reparameterize the ODE vector field, resulting in a data-driven prior distribution.
We also apply normalizing flows to the posterior inference of GP ODEs to resolve the issue of strong mean-field assumptions.
We validate the effectiveness of our approach on simulated dynamical systems and real-world human motion data.
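A loosely hedged sketch of the reparameterization pattern (our reading only; the names and the coupling form are illustrative, not the paper's code): push the value of a simple base vector field through an invertible, input-conditioned affine transform, so the ODE drift inherits a richer, data-driven prior.
```python
import torch
import torch.nn as nn

class FlowReparamField(nn.Module):
    """Illustrative flow-reparameterized ODE vector field (not the paper's code)."""
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        self.base = nn.Linear(dim, dim)   # stand-in for a GP-based base field
        self.log_scale = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim))
        self.shift = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.base(x)
        # elementwise affine transform of v, invertible in v for fixed x
        return v * torch.exp(self.log_scale(x)) + self.shift(x)
```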
arXiv Detail & Related papers (2023-09-17T09:28:47Z)
- Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation [21.27702285561771]
We present the first convergence analysis of the Schrödinger bridge algorithm based on approximated projections.
As for practical applications, we apply the Schrödinger bridge (SBP) approach to probabilistic time series imputation by generating missing values conditioned on observed data.
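For context, the dynamic Schrödinger bridge problem seeks the path measure closest to a reference process subject to fixed endpoint marginals; in standard notation,
```latex
\min_{P \,:\, P_0 = \mu,\; P_1 = \nu} \mathrm{KL}(P \,\Vert\, Q),
```
where $Q$ is a reference path measure (e.g. Brownian motion) and $\mu$, $\nu$ are the prescribed initial and terminal distributions; for imputation, the bridge is additionally conditioned on the observed data.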
arXiv Detail & Related papers (2023-05-12T04:39:01Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
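Schematically, a reflected diffusion augments the SDE with a boundary term that keeps the state inside the data support $\Omega$ (the standard Skorokhod formulation, not a paper-specific equation):
```latex
\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t + \mathrm{d}L_t,
```
where $L_t$ is a local-time process that acts only when $x_t$ touches $\partial\Omega$ and pushes it back along the inward normal.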
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632]
The complex nature of graphs thwarts the adoption of the invariance principle for out-of-distribution (OOD) generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
arXiv Detail & Related papers (2022-02-11T04:38:39Z)
- CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
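A hedged sketch of the regularization pattern (our reading; the linear reconstruction and the NOTEARS-style acyclicity penalty are illustrative choices, not necessarily the paper's exact formulation):
```python
import torch
import torch.nn as nn

class CausalReconRegularizer(nn.Module):
    """Reconstruct each feature from the others through a learned weighted
    adjacency A, penalizing cycles via h(A) = tr(exp(A * A)) - d."""
    def __init__(self, d: int):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(d, d))        # learned adjacency

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        d = X.shape[1]
        A = self.A * (1.0 - torch.eye(d))               # forbid self-loops
        recon_loss = ((X - X @ A) ** 2).mean()          # parents predict features
        acyclicity = torch.trace(torch.matrix_exp(A * A)) - d
        return recon_loss + acyclicity
```
In training one would add this term, scaled by a tuning weight, to the supervised loss, so that features are reconstructed only through their learned causal neighbours.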
arXiv Detail & Related papers (2020-09-28T09:49:38Z)