The Stochastic Occupation Kernel Method for System Identification
- URL: http://arxiv.org/abs/2406.15661v1
- Date: Fri, 21 Jun 2024 21:36:18 GMT
- Title: The Stochastic Occupation Kernel Method for System Identification
- Authors: Michael Wells, Kamel Lahouel, Bruno Jedynak
- Abstract summary: We propose a two-step method for learning the drift and diffusion of a stochastic differential equation given snapshots of the process.
In the first step, we learn the drift by applying the occupation kernel algorithm to the expected value of the process.
In the second step, we learn the diffusion given the drift using a semi-definite program.
- Score: 0.786519149320184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The method of occupation kernels has been used to learn ordinary differential equations from data in a non-parametric way. We propose a two-step method for learning the drift and diffusion of a stochastic differential equation given snapshots of the process. In the first step, we learn the drift by applying the occupation kernel algorithm to the expected value of the process. In the second step, we learn the diffusion given the drift using a semi-definite program. Specifically, we learn the diffusion squared as a non-negative function in an RKHS associated with the square of a kernel. We present examples and simulations.
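A minimal Python sketch of this two-step recipe on a simulated one-dimensional Ornstein-Uhlenbeck process may help fix ideas. Everything below is an illustrative assumption rather than the authors' implementation: the segment construction of the occupation kernels, the Gaussian base kernel, and, in step two, non-negative least squares on squared-kernel features standing in for the paper's semi-definite program.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Simulate snapshots of a 1-D Ornstein-Uhlenbeck process:
#   dX_t = -X_t dt + 0.5 dW_t   (true drift f(x) = -x, true diffusion^2 = 0.25)
sigma, dt, n_steps, n_paths = 0.5, 0.01, 200, 300
X = np.zeros((n_paths, n_steps + 1))
X[:, 0] = rng.normal(2.0, 0.1, n_paths)
for t in range(n_steps):
    X[:, t + 1] = X[:, t] - X[:, t] * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

def rbf(a, b, ell=0.5):
    """Gaussian base kernel matrix between 1-D point sets a and b."""
    a, b = np.atleast_1d(a), np.atleast_1d(b)
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

# Step 1: occupation-kernel ridge regression for the drift, applied to the
# mean path. Segment i of the mean trajectory m yields the linear constraint
#   m(end_i) - m(start_i) = integral of f along the segment,
# whose Riesz representer is the occupation kernel
#   Gamma_i(y) = integral of k(m(t), y) dt over the segment.
m = X.mean(axis=0)
pts, n_seg = 10, 20
seg = m[:n_steps].reshape(n_seg, pts)          # samples along each segment
b = m[pts::pts] - m[:n_steps:pts]              # increments of the mean path
G = np.array([[rbf(si, sj).sum() * dt * dt for sj in seg] for si in seg])
alpha = np.linalg.solve(G + 1e-6 * np.eye(n_seg), b)

def drift_hat(y):
    """f_hat(y) = sum_i alpha_i * Gamma_i(y), a sum of occupation kernels."""
    return sum(a * rbf(si, y).sum(axis=0) * dt for a, si in zip(alpha, seg))

# Step 2: learn the squared diffusion as a non-negative function. The paper
# does this with a semi-definite program in the RKHS of the squared kernel;
# here non-negative least squares on squared-kernel features stands in,
# which also keeps sigma2_hat >= 0 everywhere.
xs = X[:, :-1].ravel()
resid2 = (np.diff(X, axis=1).ravel() - drift_hat(xs) * dt) ** 2 / dt
centers = np.linspace(-0.5, 2.5, 15)
w, _ = nnls(rbf(xs, centers) ** 2, resid2)
sigma2_hat = lambda y: rbf(y, centers) ** 2 @ w

print("drift_hat(1.0), true -1.0:", drift_hat(np.array([1.0])))
print("sigma2_hat(1.0), true 0.25:", sigma2_hat(np.array([1.0])))
```

Restricting to non-negative weights on squared-kernel features keeps the estimated diffusion squared non-negative pointwise, which mirrors the constraint the paper enforces over the whole RKHS of the squared kernel via the SDP.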
Related papers
- The Stochastic Occupation Kernel (SOCK) Method for Learning Stochastic Differential Equations [0.786519149320184]
We present a novel kernel-based method for learning multivariate stochastic differential equations (SDEs).
We first estimate the drift term function, then the (matrix-valued) diffusion function given the drift.
We propose a simple learning procedure that retains strong predictive accuracy while using Fenchel duality to promote efficiency.
arXiv Detail & Related papers (2025-05-16T18:38:50Z)
- Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z)
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea is to fit an RKHS-based approximation of the corresponding Fokker-Planck equation to the observed data.
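A loose, hypothetical illustration of that link: in the one-dimensional zero-flux stationary case, the Fokker-Planck equation ties the drift to the data distribution via f(x) = (sigma^2/2) (log p)'(x). The sketch below assumes constant known diffusion and uses a kernel density estimate in place of the paper's RKHS machinery.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
sigma = 0.5
# Stationary samples of dX_t = -X_t dt + sigma dW_t, i.e. X ~ N(0, sigma^2/2).
samples = rng.normal(0.0, sigma / np.sqrt(2.0), 20000)
p_hat = gaussian_kde(samples)                  # stands in for the RKHS fit

def drift_hat(x, h=1e-3):
    # zero-flux stationary Fokker-Planck: f(x) = (sigma^2 / 2) * (log p)'(x)
    return (sigma ** 2 / 2) * (np.log(p_hat(x + h)) - np.log(p_hat(x - h))) / (2 * h)

for x in (-0.3, 0.0, 0.3):
    est = drift_hat(np.array([x]))[0]
    print(f"drift_hat({x:+.1f}) = {est:+.3f}   (true {-x:+.1f})")
```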
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Unlike other fast sampling methods, which are sequential in nature, ours is the first parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefit of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
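A hypothetical numerical illustration of that spectral claim, assuming plain gradient descent on a finite-dimensional quadratic with made-up eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
U, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthonormal eigenbasis
lam = np.array([1.0, 0.5, 0.1, 0.05, 0.01])    # eigenvalues of A
A = U @ np.diag(lam) @ U.T
x_star = rng.normal(size=d)
b = A @ x_star                                  # minimize 0.5 x'Ax - x'b

def gd(eta, steps):
    x = np.zeros(d)
    for _ in range(steps):
        x -= eta * (A @ x - b)
    return x

# After k steps, the component of the iterate along eigenvector u_i equals
# (1 - (1 - eta * lam_i)^k) times that of the minimizer: a spectral filter
# whose cut-off is set jointly by the learning rate and the stopping time.
for eta in (0.1, 1.0):                          # both stable: eta < 2 / max(lam)
    filt = (U.T @ gd(eta, 50)) / (U.T @ x_star)
    print(f"eta={eta}: fraction recovered per eigendirection", np.round(filt, 3))
```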
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Gaussian Processes and Statistical Decision-making in Non-Euclidean Spaces [96.53463532832939]
We develop techniques for broadening the applicability of Gaussian processes.
We introduce a wide class of efficient approximations built from this viewpoint.
We develop a collection of Gaussian process models over non-Euclidean spaces.
arXiv Detail & Related papers (2022-02-22T01:42:57Z)
- Temporal Difference Learning with Continuous Time and State in the Stochastic Setting [0.0]
We consider the problem of continuous-time policy evaluation.
This consists of learning, from observations, the value function associated with an uncontrolled continuous-time dynamic and a reward function.
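As a rough, hypothetical sketch of that learning problem (tabular TD(0) on a time-discretized Ornstein-Uhlenbeck dynamic with an invented quadratic reward; none of this is the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, gamma, sigma, alpha = 0.01, 1.0, 0.3, 0.05
bins = np.linspace(-2.0, 2.0, 41)              # crude state discretization
V = np.zeros(len(bins))

def idx(x):
    return int(np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 1))

x = 0.0
for _ in range(200_000):
    # one observed transition of the uncontrolled dynamic dX = -X dt + sigma dW
    x_next = x - x * dt + sigma * np.sqrt(dt) * rng.normal()
    # TD(0) target: running reward r(x) = -x^2 over dt, discounted continuation
    td = -x ** 2 * dt + np.exp(-gamma * dt) * V[idx(x_next)] - V[idx(x)]
    V[idx(x)] += alpha * td
    x = x_next

print("V(0) vs V(1):", V[idx(0.0)], V[idx(1.0)])
```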
arXiv Detail & Related papers (2022-02-16T10:10:53Z)
- A Kernel Learning Method for Backward SDE Filter [1.7035011973665108]
We develop a kernel learning backward SDE filter method to propagate the state of a dynamical system based on its partial noisy observations.
We introduce a kernel learning method to learn a continuous global approximation for the conditional probability density function of the target state.
Numerical experiments demonstrate that the kernel learning backward SDE filter is highly effective and efficient.
arXiv Detail & Related papers (2022-01-25T19:49:19Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient flow and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- Learning interaction kernels in mean-field equations of 1st-order systems of interacting particles [1.776746672434207]
We introduce a nonparametric algorithm to learn interaction kernels of mean-field equations for 1st-order systems of interacting particles.
By least squares with regularization, the algorithm learns the kernel efficiently on data-adaptive hypothesis spaces.
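A minimal, hypothetical sketch of that regression, assuming the usual first-order mean-field form dx_i/dt = (1/N) sum_j phi(|x_j - x_i|)(x_j - x_i) and a fixed Gaussian basis in place of the paper's data-adaptive hypothesis spaces:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 40, 0.01, 150
phi_true = lambda r: np.exp(-r)                 # ground-truth interaction kernel

# Simulate the 1st-order system dx_i/dt = (1/N) sum_j phi(|x_j-x_i|)(x_j-x_i).
x = rng.normal(size=N)
traj = [x.copy()]
for _ in range(steps):
    diff = x[None, :] - x[:, None]              # pairwise x_j - x_i
    x = x + dt * (phi_true(np.abs(diff)) * diff).mean(axis=1)
    traj.append(x.copy())
traj = np.array(traj)

centers, width = np.linspace(0.0, 4.0, 12), 0.3

def features(xt):
    # column b holds (1/N) sum_j basis_b(r_ij) (x_j - x_i) for each particle i
    diff = xt[None, :] - xt[:, None]
    basis = np.exp(-(np.abs(diff)[..., None] - centers) ** 2 / (2 * width ** 2))
    return (basis * diff[..., None]).mean(axis=1)

# Regularized least squares: observed velocities against kernel-basis features.
Phi = np.vstack([features(xt) for xt in traj[:-1]])
y = (np.diff(traj, axis=0) / dt).ravel()
w = np.linalg.solve(Phi.T @ Phi + 1e-4 * np.eye(len(centers)), Phi.T @ y)

phi_hat = lambda r: np.exp(-(r[:, None] - centers) ** 2 / (2 * width ** 2)) @ w
r = np.linspace(0.1, 2.0, 5)
print("phi_hat:", np.round(phi_hat(r), 3), "phi_true:", np.round(phi_true(r), 3))
```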
arXiv Detail & Related papers (2020-10-29T15:37:17Z)
- A Mean-Field Theory for Learning the Schönberg Measure of Radial Basis Functions [13.503048325896174]
We learn the distribution in the Schönberg integral representation of the radial basis functions from training samples.
We prove that in the scaling limits, the empirical measure of the Langevin particles converges to the law of a reflected Itô diffusion-drift process.
arXiv Detail & Related papers (2020-06-23T21:04:48Z)
- Nonparametric Bayesian volatility learning under microstructure noise [2.812395851874055]
We study the problem of learning the volatility under market microstructure noise.
Specifically, we consider noisy discrete-time observations from a stochastic differential equation.
We develop a novel computational method to learn the diffusion coefficient of the equation.
arXiv Detail & Related papers (2018-05-15T07:32:18Z)