Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence
- URL: http://arxiv.org/abs/2305.15557v2
- Date: Tue, 23 Apr 2024 11:34:52 GMT
- Title: Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence
- Authors: Riccardo Bonalli, Alessandro Rudi
- Abstract summary: We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to discrete-time observations of the state.
- Score: 65.63201894457404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of multi-dimensional non-linear stochastic differential equations, which relies upon discrete-time observations of the state. The key idea consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations, yielding theoretical estimates of non-asymptotic learning rates which, unlike previous works, become increasingly tighter as the regularity of the unknown drift and diffusion coefficients increases. Since our method is kernel-based, offline pre-processing can be leveraged to enable an efficient numerical implementation, offering an excellent balance between precision and computational complexity.
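The paper's actual estimator fits an RKHS approximation of the Fokker-Planck equation, which is more involved than can be shown here. As a loose, minimal illustration of kernel-based drift identification from discrete-time observations, the sketch below instead runs kernel ridge regression on Euler increments (a Kramers-Moyal-style baseline, not the authors' method; the Gaussian kernel, bandwidth, and regularization constant are arbitrary choices for the example).

```python
import numpy as np

def gaussian_gram(X, Y, bandwidth):
    """Gaussian kernel Gram matrix k(x, y) = exp(-|x - y|^2 / (2 h^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def fit_drift_krr(X, dt, bandwidth=0.5, lam=1e-3):
    """Kernel ridge regression of Euler increments on states, i.e. a
    Kramers-Moyal-style estimate of b(x) ~ E[(X_{t+dt} - X_t)/dt | X_t = x]."""
    Z, Y = X[:-1], (X[1:] - X[:-1]) / dt          # states and finite-difference targets
    K = gaussian_gram(Z, Z, bandwidth)
    alpha = np.linalg.solve(K + lam * len(Z) * np.eye(len(Z)), Y)
    return lambda x: gaussian_gram(np.atleast_2d(x), Z, bandwidth) @ alpha

# Toy check on an Ornstein-Uhlenbeck path dX = -X dt + 0.5 dW (true drift b(x) = -x).
rng = np.random.default_rng(0)
dt, n = 0.01, 2000
X = np.zeros((n, 1))
for t in range(n - 1):
    X[t + 1] = X[t] - X[t] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(1)
b_hat = fit_drift_krr(X, dt)
print(b_hat(np.array([1.0])))  # should be close to b(1) = -1
```

The diffusion coefficient can be estimated in the same spirit by regressing the squared increments (X_{t+dt} - X_t)(X_{t+dt} - X_t)^T / dt on the state.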
Related papers
- Kinetic Interacting Particle Langevin Monte Carlo [0.0]
This paper introduces and analyses interacting underdamped Langevin algorithms, for statistical inference in latent variable models.
We propose a diffusion process that evolves jointly in the space of parameters and latent variables.
We provide two explicit discretisations of this diffusion as practical algorithms to estimate parameters of statistical models.
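As a minimal sketch of the underdamped dynamics underlying such algorithms, the code below applies an Euler-Maruyama discretisation to kinetic Langevin dynamics for a single chain and a generic potential U; the paper's interacting-particle and joint latent-variable structure is omitted, and the step size and friction are arbitrary choices.

```python
import numpy as np

def kinetic_langevin(grad_U, theta0, n_steps=10_000, step=1e-2, gamma=1.0, seed=0):
    """Euler-Maruyama discretisation of underdamped Langevin dynamics:
        d theta = v dt,   d v = -grad U(theta) dt - gamma v dt + sqrt(2 gamma) dW.
    Returns the trajectory of theta."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    traj = np.empty((n_steps, theta.size))
    for k in range(n_steps):
        theta = theta + step * v
        v = (v - step * grad_U(theta) - step * gamma * v
             + np.sqrt(2 * gamma * step) * rng.standard_normal(theta.shape))
        traj[k] = theta
    return traj

# Sample from a standard Gaussian target: U(theta) = |theta|^2 / 2, grad U = theta.
samples = kinetic_langevin(lambda th: th, theta0=np.zeros(2))
print(samples[2000:].mean(0), samples[2000:].var(0))  # ~0 mean, ~1 variance
```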
arXiv Detail & Related papers (2024-07-08T09:52:46Z)
- DynGMA: a robust approach for learning stochastic differential equations from data [13.858051019755283]
We introduce novel approximations to the transition density of the parameterized SDE.
Our method exhibits superior accuracy compared to baseline methods in learning fully unknown drift and diffusion functions.
It is capable of handling data with low time resolution and variable, even uncontrollable, time step sizes.
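DynGMA builds Gaussian-mixture approximations of the transition density; the sketch below shows only the simplest single-Gaussian (Euler-Maruyama) variant of that idea, where maximising the resulting log-likelihood over parameterised f and L learns the drift and diffusion. Function names and the toy model are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def em_log_transition(x_next, x, f, L, dt):
    """Single-Gaussian (Euler-Maruyama) approximation of the SDE transition
    density for dX = f(X) dt + L(X) dW:
        X_{t+dt} | X_t = x  ~  N(x + f(x) dt, L(x) L(x)^T dt)."""
    mean = x + f(x) * dt
    cov = L(x) @ L(x).T * dt
    diff = x_next - mean
    _, logdet = np.linalg.slogdet(2 * np.pi * cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

def path_log_likelihood(path, f, L, dt):
    """Sum of one-step log densities along a regularly sampled trajectory;
    maximising this over parameters of f and L learns drift and diffusion."""
    return sum(em_log_transition(path[k + 1], path[k], f, L, dt)
               for k in range(len(path) - 1))

# Score an Ornstein-Uhlenbeck path under its true model f(x) = -x, L(x) = 0.3 I.
rng = np.random.default_rng(1)
dt = 0.05
path = [np.zeros(2)]
for _ in range(200):
    x = path[-1]
    path.append(x - x * dt + 0.3 * np.sqrt(dt) * rng.standard_normal(2))
print(path_log_likelihood(path, lambda x: -x, lambda x: 0.3 * np.eye(2), dt))
```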
arXiv Detail & Related papers (2024-02-22T12:09:52Z)
- Moment Estimation for Nonparametric Mixture Models Through Implicit Tensor Decomposition [7.139680863764187]
We present an alternating least squares type numerical optimization scheme to estimate conditionally-independent mixture models in $\mathbb{R}^n$.
We compute the cumulative distribution functions, higher moments and other statistics of the component distributions through linear solves.
Numerical experiments demonstrate the competitive performance of the algorithm, and its applicability to many models and applications.
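The paper's scheme works with the moment tensor implicitly, never forming it; as a minimal explicit sketch of the underlying alternating-least-squares idea, the code below runs standard CP-ALS on a 3-way tensor. The helper names and the explicit unfoldings are simplifying assumptions for the example.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of (m, r) and (n, r) -> (m*n, r)."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    """Alternating least squares for a rank-r CP decomposition
    T[i, j, k] ~= sum_r A[i, r] B[j, r] C[k, r]."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (n1, n2, n3))
    T1 = T.reshape(n1, -1)                       # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(n2, -1)    # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(n3, -1)    # mode-3 unfolding
    for _ in range(n_iter):
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Recover a planted rank-2 structure from an exact low-rank tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((5, 2)), rng.standard_normal((6, 2)), rng.standard_normal((7, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
print(np.abs(T - np.einsum('ir,jr,kr->ijk', A, B, C)).max())  # ~0
```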
arXiv Detail & Related papers (2022-10-25T23:31:33Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
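A toy illustration of the phenomenon (not the authors' estimator): a cheap approximate GP posterior, here a subset-of-data one, incurs an error in its posterior mean that its own reported variance does not account for; the paper's methods add a computational-uncertainty term for exactly this gap.

```python
import numpy as np

def rbf(X, Y, ls=0.5):
    """Squared-exponential kernel on 1-d inputs."""
    return np.exp(-((X[:, None] - Y[None, :]) ** 2) / (2 * ls ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)
Xs = np.linspace(-3, 3, 5)      # test points
noise = 0.1 ** 2

def gp_posterior(Xtrain, ytrain):
    K = rbf(Xtrain, Xtrain) + noise * np.eye(len(Xtrain))
    Ks = rbf(Xs, Xtrain)
    mean = Ks @ np.linalg.solve(K, ytrain)
    var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

m_full, v_full = gp_posterior(X, y)
m_cheap, v_cheap = gp_posterior(X[:20], y[:20])   # limited "computation": subset of data
comp_err = np.abs(m_cheap - m_full)               # error caused by the approximation
print(np.c_[comp_err, np.sqrt(v_cheap)])
# the cheap posterior's own variance says nothing about comp_err;
# a combined-uncertainty estimate must account for both terms
```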
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
- Boosting in Univariate Nonparametric Maximum Likelihood Estimation [5.770800671793959]
Nonparametric maximum likelihood estimation is intended to infer the unknown density.
To alleviate over-parameterization in nonparametric data fitting, smoothing assumptions are usually merged into the estimation.
We deduce the boosting algorithm by the second-order approximation of nonparametric log-likelihood.
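A grid-based caricature of that idea (emphatically not the paper's algorithm): repeatedly take Newton-type steps on the binned log-likelihood, using the gradient and the second-order (diagonal Hessian) term, and smooth each step to encode the smoothing assumption. Grid size, kernel, and learning rate are arbitrary choices.

```python
import numpy as np

def boosted_npmle(x, grid, n_rounds=50, lr=0.5, smooth=5):
    """Boosting-flavoured NPMLE on a 1-d grid: each round performs a smoothed
    Newton step on the log-weights of a binned density model."""
    dx = grid[1] - grid[0]
    counts = np.histogram(x, bins=np.append(grid - dx / 2, grid[-1] + dx / 2))[0]
    freq = counts / counts.sum()
    s = np.zeros_like(grid)                      # log-weights per bin
    kernel = np.exp(-0.5 * (np.arange(-smooth, smooth + 1) / (smooth / 2)) ** 2)
    kernel /= kernel.sum()
    for _ in range(n_rounds):
        p = np.exp(s - s.max()); p /= p.sum()    # softmax over bins
        grad, hess = freq - p, p                 # 1st- and 2nd-order terms
        step = np.convolve(grad / np.maximum(hess, 1e-12), kernel, mode='same')
        s += lr * step                           # smoothed Newton-type update
    p = np.exp(s - s.max()); p /= p.sum()
    return p / dx                                # density values on the grid

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1, 0.3, 500), rng.normal(1.5, 0.5, 500)])
grid = np.linspace(-3, 3, 121)
dens = boosted_npmle(data, grid)
print(grid[np.argmax(dens)])  # near one of the two modes (-1 or 1.5)
```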
arXiv Detail & Related papers (2021-01-21T08:46:33Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivatives reconstruction and local-in-time inference approaches does not work for time series analysis of second or higher order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
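A minimal sketch of the (L)ISTA iteration that NLISTA builds on, with the matrices and threshold hand-set to their classical ISTA values rather than learned, and for a linear rather than nonlinear observation model:

```python
import numpy as np

def soft_threshold(z, theta):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def lista_forward(y, W_e, S, theta, n_layers=16):
    """Unrolled (L)ISTA: x_{k+1} = soft_threshold(W_e y + S x_k, theta).
    In LISTA/NLISTA the matrices and thresholds are trained end-to-end."""
    x = np.zeros(S.shape[0])
    for _ in range(n_layers):
        x = soft_threshold(W_e @ y + S @ x, theta)
    return x

rng = np.random.default_rng(0)
m, n, lam = 50, 100, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
W_e, S, theta = A.T / L, np.eye(n) - A.T @ A / L, lam / L
x_hat = lista_forward(y, W_e, S, theta, n_layers=200)
print(np.linalg.norm(x_hat - x_true))           # small recovery error
```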
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
- A Deterministic Approximation to Neural SDEs [38.23826389188657]
We show that obtaining well-calibrated uncertainty estimates from NSDEs is computationally prohibitive.
We develop a computationally affordable deterministic scheme which accurately approximates the transition kernel.
Our method also improves prediction accuracy thanks to the numerical stability of deterministic training.
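The paper's deterministic scheme approximates the NSDE transition kernel; as a generic sketch of the moment-matching idea (linearisation-based, not necessarily the authors' exact scheme), the code below propagates a mean and covariance through Euler steps of an SDE without any sampling.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-5):
    """Finite-difference Jacobian of f at x."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        J[:, i] = (f(x + e) - fx) / eps
    return J

def moment_step(m, P, f, L, dt):
    """Deterministic propagation of mean m and covariance P through one Euler
    step of dX = f(X) dt + L dW, by linearising f at the mean:
        m' = m + f(m) dt,   P' = F P F^T + L L^T dt,   F = I + J_f(m) dt."""
    F = np.eye(m.size) + jacobian_fd(f, m) * dt
    return m + f(m) * dt, F @ P @ F.T + L @ L.T * dt

# Propagate moments of an OU process dX = -X dt + 0.3 dW for 100 steps.
m, P, dt = np.array([1.0]), np.zeros((1, 1)), 0.05
for _ in range(100):
    m, P = moment_step(m, P, lambda x: -x, 0.3 * np.eye(1), dt)
print(m, P)  # mean decays toward 0; variance approaches 0.3**2 / 2 = 0.045
```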
arXiv Detail & Related papers (2020-06-16T08:00:26Z)
- On Learning Rates and Schrödinger Operators [105.32118775014015]
We present a general theoretical analysis of the effect of the learning rate.
We find that the learning rate tends to zero for a broad class of non-neural functions.
arXiv Detail & Related papers (2020-04-15T09:52:37Z)