A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting
- URL: http://arxiv.org/abs/2409.14585v1
- Date: Sun, 22 Sep 2024 20:25:45 GMT
- Title: A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting
- Authors: Kasper Bågmark, Adam Andersson, Stig Larsson, Filip Rydin
- Abstract summary: A numerical scheme for approximating the nonlinear filtering density is introduced and its convergence rate is established.
For the prediction step, the scheme approximates the Fokker--Planck equation with a deep splitting scheme, and performs an exact update through Bayes' formula.
This results in a classical prediction-update filtering algorithm that operates online for new observation sequences post-training.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A numerical scheme for approximating the nonlinear filtering density is introduced and its convergence rate is established, theoretically under a parabolic Hörmander condition, and empirically for two examples. For the prediction step, between the noisy and partial measurements at discrete times, the scheme approximates the Fokker--Planck equation with a deep splitting scheme, and performs an exact update through Bayes' formula. This results in a classical prediction-update filtering algorithm that operates online for new observation sequences post-training. The algorithm employs a sampling-based Feynman--Kac approach, designed to mitigate the curse of dimensionality. Our convergence proof relies on the Malliavin integration-by-parts formula. As a corollary we obtain the convergence rate for the approximation of the Fokker--Planck equation alone, disconnected from the filtering problem.
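The prediction-update cycle from the abstract can be illustrated with a minimal 1D sketch. The paper's prediction step solves the Fokker--Planck equation with a deep splitting scheme; here a simple explicit finite-difference step for an assumed Ornstein--Uhlenbeck signal dX = -theta*X dt + sigma dW stands in for it, and the update applies Bayes' formula for an assumed additive-Gaussian observation model. All parameters and the signal/observation models are illustrative assumptions, not the authors' setup.

```python
import numpy as np

theta, sigma = 1.0, 0.5          # assumed OU drift/diffusion parameters
x = np.linspace(-4.0, 4.0, 401)  # spatial grid
dx = x[1] - x[0]

def predict(p, dt=1e-3, steps=100):
    """Evolve the density under the Fokker-Planck equation (explicit Euler,
    a stand-in for the paper's deep splitting scheme)."""
    for _ in range(steps):
        drift = theta * np.gradient(x * p, dx)                   # d/dx [theta*x*p]
        diff = 0.5 * sigma**2 * np.gradient(np.gradient(p, dx), dx)
        p = np.clip(p + dt * (drift + diff), 0.0, None)          # keep nonnegative
        p /= p.sum() * dx                                        # renormalize
    return p

def update(p, y, obs_std=0.3):
    """Exact Bayes update for an observation y = X + Gaussian noise
    (assumed observation model)."""
    likelihood = np.exp(-0.5 * ((y - x) / obs_std) ** 2)
    post = p * likelihood
    return post / (post.sum() * dx)

# One filtering cycle starting from a standard normal prior.
p = np.exp(-0.5 * x**2)
p /= p.sum() * dx
p = update(predict(p), y=0.8)
print(round(p.sum() * dx, 6))  # posterior density integrates to 1
```

In the paper this prediction step is replaced by a trained network, so the whole cycle runs online on new observation sequences without re-solving a PDE at filter time.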
Related papers
- JKO for Landau: a variational particle method for homogeneous Landau equation [7.600098227248821]
We develop a novel implicit particle method for the Landau equation in the framework of the JKO scheme.
We first reformulate the Landau metric in a computationally friendly form, and then translate it into the Lagrangian viewpoint using the flow map.
A key observation is that, while the flow map evolves according to a rather complicated integral equation, the unknown component is merely a score function of the corresponding density.
arXiv Detail & Related papers (2024-09-18T20:08:19Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Taming Score-Based Diffusion Priors for Infinite-Dimensional Nonlinear Inverse Problems [4.42498215122234]
This work introduces a sampling method capable of solving Bayesian inverse problems in function space.
It does not assume the log-concavity of the likelihood, meaning that it is compatible with nonlinear inverse problems.
A novel convergence analysis is conducted, inspired by the fixed-point methods established for traditional regularization-by-denoising algorithms.
arXiv Detail & Related papers (2024-05-24T16:17:01Z) - Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score based MCMC method that is deterministic, resulting in a deterministic evolution for particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z) - Adaptive deep density approximation for fractional Fokker-Planck equations [6.066542157374599]
We present an explicit PDF model induced by a flow-based deep generative model, KRnet, which constructs a transport map from a simple distribution to the target distribution.
We consider two methods to approximate the fractional Laplacian.
Based on these two different ways for the approximation of the fractional Laplacian, we propose two models, MCNF and GRBFNF, to approximate stationary FPEs and time-dependent FPEs.
arXiv Detail & Related papers (2022-10-26T00:58:17Z) - Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z) - Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
arXiv Detail & Related papers (2022-03-18T05:49:13Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient diffusion and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z) - A Bayesian Filtering Algorithm for Gaussian Mixture Models [0.0]
A class of state-space systems can be modelled via Gaussian mixtures.
The exact solution to this filtering problem involves an exponential growth in the number of mixture terms.
A square-root implementation of the unified algorithm is presented.
arXiv Detail & Related papers (2017-05-16T01:05:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.