Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent
Observation Framework
- URL: http://arxiv.org/abs/2307.13147v2
- Date: Mon, 5 Feb 2024 15:47:06 GMT
- Title: Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent
Observation Framework
- Authors: William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann
- Abstract summary: We introduce a new loss function, which allows us to deal with noisy observations and explain why the previously used loss function did not lead to a consistent estimator.
- Score: 6.404122934568861
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is
a model for predicting continuous-time stochastic processes with irregular and
incomplete observations. In particular, the method learns optimal forecasts
given irregularly sampled time series of incomplete past observations. So far
the process itself and the coordinate-wise observation times were assumed to be
independent and observations were assumed to be noiseless. In this work we
discuss two extensions to lift these restrictions and provide theoretical
guarantees as well as empirical examples for them. In particular, we can lift
the assumption of independence by extending the theory to much more realistic
settings of conditional independence without any need to change the algorithm.
Moreover, we introduce a new loss function, which allows us to deal with noisy
observations and explain why the previously used loss function did not lead to
a consistent estimator.
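As a rough illustration of the model class described in the abstract, the following is a minimal, hypothetical sketch (not the authors' implementation) of the control flow of a neural-jump-ODE-style forecaster: between irregular observation times a hidden state evolves by an Euler-discretized ODE, and at each observation time the state jumps via an update map. The functions `drift`, `jump`, and `readout` are fixed stand-ins for what would be learned networks.

```python
# Hypothetical sketch of NJ-ODE-style control flow; drift/jump/readout are
# fixed stand-ins for learned networks, not the paper's actual architecture.
import math

def drift(h):
    # stand-in for a learned drift network: a fixed smooth map
    return [math.tanh(v) for v in h]

def jump(h, x):
    # stand-in for a learned jump network: mix hidden state and new observation
    return [0.5 * hv + 0.5 * xv for hv, xv in zip(h, x)]

def readout(h):
    # stand-in for a learned readout mapping hidden state to a forecast
    return list(h)

def forecast(times, observations, h0, dt=0.01):
    """Return the model's prediction just before each observation time.

    times: increasing list of observation times (irregularly spaced);
    observations: one (possibly noisy) observed vector per time.
    """
    h, t = list(h0), 0.0
    preds_before = []
    for t_obs, x in zip(times, observations):
        while t < t_obs:                      # evolve between observations
            h = [hv + dt * dv for hv, dv in zip(h, drift(h))]
            t += dt
        preds_before.append(readout(h))       # forecast made before seeing x
        h = jump(h, x)                        # jump update at the observation
    return preds_before
```

Training would minimize a loss comparing the forecasts just before each jump against the observations; the paper's contribution includes a modified loss of this kind that remains a consistent estimator when the observations are noisy.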
Related papers
- Learning Unstable Continuous-Time Stochastic Linear Control Systems [0.0]
We study the problem of system identification for continuous-time dynamics, based on a single finite-length state trajectory.
We present a method for estimating the possibly unstable open-loop matrix by employing properly randomized control inputs.
We establish theoretical performance guarantees showing that the estimation error decays with trajectory length, a measure of excitability, and the signal-to-noise ratio.
arXiv Detail & Related papers (2024-09-17T16:24:51Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We show when and where double descent appears, and that its location is not inherently tied to the interpolation threshold p=n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Probabilistic Learning of Multivariate Time Series with Temporal Irregularity [25.91078012394032]
We address temporal irregularities, including nonuniform time intervals and component misalignment.
We develop a conditional flow representation to non-parametrically represent the data distribution, which is typically non-Gaussian.
The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
arXiv Detail & Related papers (2023-06-15T14:08:48Z)
- Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z)
- Sequential Predictive Conformal Inference for Time Series [16.38369532102931]
We present a new distribution-free conformal prediction algorithm for sequential data (e.g., time series).
We specifically account for the fact that time series data are non-exchangeable, which renders many existing conformal prediction algorithms inapplicable.
arXiv Detail & Related papers (2022-12-07T05:07:27Z)
- Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z)
- Unbiased Estimation using Underdamped Langevin Dynamics [0.0]
We focus upon developing an unbiased method via the underdamped Langevin dynamics.
We prove, under standard assumptions, that our estimator is of finite variance and either has finite expected cost, or has finite cost with a high probability.
arXiv Detail & Related papers (2022-06-14T23:05:56Z)
- Nonparametric Conditional Local Independence Testing [69.31200003384122]
Conditional local independence is an independence relation among continuous time processes.
No nonparametric test of conditional local independence has been available.
We propose such a nonparametric test based on double machine learning.
arXiv Detail & Related papers (2022-03-25T10:31:02Z)
- Comparing Sequential Forecasters [35.38264087676121]
Consider two forecasters, each making a single prediction for a sequence of events over time.
How might we compare these forecasters, either online or post-hoc, while avoiding unverifiable assumptions on how the forecasts and outcomes were generated?
We present novel sequential inference procedures for estimating the time-varying difference in forecast scores.
We empirically validate our approaches by comparing real-world baseball and weather forecasters.
arXiv Detail & Related papers (2021-09-30T22:54:46Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivatives reconstruction and local-in-time inference approaches does not work for time series analysis of second or higher order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.