Deep Kalman Filters Can Filter
- URL: http://arxiv.org/abs/2310.19603v1
- Date: Mon, 30 Oct 2023 14:58:12 GMT
- Title: Deep Kalman Filters Can Filter
- Authors: Blanka Horvath, Anastasis Kratsios, Yannick Limmer, Xuwei Yang
- Abstract summary: Deep Kalman filters (DKFs) are a class of neural network models that generate Gaussian probability measures from sequential data.
DKFs are inspired by the Kalman filter, but they lack concrete theoretical ties to the filtering problem.
We show that continuous-time DKFs can implement the conditional law of a broad class of non-Markovian and conditionally Gaussian signal processes.
- Score: 9.131190818372474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Kalman filters (DKFs) are a class of neural network models that generate
Gaussian probability measures from sequential data. Though DKFs are inspired by
the Kalman filter, they lack concrete theoretical ties to the stochastic
filtering problem, thus limiting their applicability to areas where traditional
model-based filters have been used, e.g., model calibration for bond and option
prices in mathematical finance. We address this issue in the mathematical
foundations of deep learning by exhibiting a class of continuous-time DKFs
which can approximately implement the conditional law of a broad class of
non-Markovian and conditionally Gaussian signal processes given noisy
continuous-time measurements. Our approximation results hold uniformly over
sufficiently regular compact subsets of paths, where the approximation error is
quantified by the worst-case 2-Wasserstein distance computed uniformly over the
given compact set of paths.
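For intuition only, and not as the authors' construction, the sketch below illustrates the two ingredients named in the abstract: a toy recurrent encoder that maps a discretised observation path to a Gaussian measure N(mu, Sigma), and the closed-form 2-Wasserstein distance between Gaussians that quantifies the approximation error. The function names toy_dkf and gaussian_w2, the layer sizes, and the random weights are illustrative placeholders.

```python
# Illustrative sketch only: a toy "deep Kalman filter"-style encoder that
# maps a discretised observation path to a Gaussian measure N(mu, Sigma),
# plus the closed-form 2-Wasserstein distance between Gaussians used to
# quantify the approximation error. Sizes and weights are placeholders.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def toy_dkf(observations, hidden_dim=16, state_dim=2):
    """Map a (T, obs_dim) observation path to a Gaussian (mean, covariance)."""
    obs_dim = observations.shape[1]
    # Placeholder recurrent weights; a real DKF would learn these from data.
    W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    W_x = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))
    W_mu = rng.normal(scale=0.1, size=(state_dim, hidden_dim))
    W_L = rng.normal(scale=0.1, size=(state_dim * state_dim, hidden_dim))

    h = np.zeros(hidden_dim)
    for y_t in observations:                  # simple tanh recurrence over the path
        h = np.tanh(W_h @ h + W_x @ y_t)

    mu = W_mu @ h                             # Gaussian mean
    L = np.tril((W_L @ h).reshape(state_dim, state_dim))
    np.fill_diagonal(L, np.exp(np.diag(L)))   # enforce a positive diagonal
    sigma = L @ L.T                           # PSD covariance via a Cholesky factor
    return mu, sigma

def gaussian_w2(mu1, sig1, mu2, sig2):
    """Closed-form 2-Wasserstein distance between two Gaussian measures."""
    root = sqrtm(sqrtm(sig2) @ sig1 @ sqrtm(sig2))
    cost = np.sum((mu1 - mu2) ** 2) + np.trace(sig1 + sig2 - 2 * root.real)
    return np.sqrt(max(cost, 0.0))

# Usage: compare the encoder's output against a stand-in reference law.
path = rng.normal(size=(50, 3))               # fake noisy observation path
mu_hat, sig_hat = toy_dkf(path)
mu_ref, sig_ref = np.zeros(2), np.eye(2)      # placeholder conditional law
print("W2 error:", gaussian_w2(mu_hat, sig_hat, mu_ref, sig_ref))
```

The uniform worst-case error in the paper corresponds to taking the supremum of such a 2-Wasserstein error over a compact set of observation paths.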
Related papers
- A competitive baseline for deep learning enhanced data assimilation using conditional Gaussian ensemble Kalman filtering [0.0]
We study two non-linear extensions of the vanilla EnKF, dubbed the conditional-Gaussian EnKF (CG-EnKF) and the normal score EnKF (NS-EnKF).
We compare these models against a state-of-the-art deep-learning-based particle filter called the score filter (SF).
Our analysis also demonstrates that the CG-EnKF and NS-EnKF can handle highly non-Gaussian additive noise perturbations, with the latter typically outperforming the former.
arXiv Detail & Related papers (2024-09-22T02:54:33Z)
- Outlier-robust Kalman Filtering through Generalised Bayes [45.51425214486509]
We derive a novel, provably robust, and closed-form Bayesian update rule for online filtering in state-space models.
Our method matches or outperforms other robust filtering methods at a much lower computational cost.
arXiv Detail & Related papers (2024-05-09T09:40:56Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Pathspace Kalman Filters with Dynamic Process Uncertainty for Analyzing Time-course Data [4.350285695981938]
We develop a Pathspace Kalman Filter (PKF) which allows us to track the uncertainties associated with the underlying data and prior knowledge.
An application of this algorithm is to automatically detect temporal windows where the internal mechanistic model deviates from the data in a time-dependent manner.
We numerically demonstrate that the PKF outperforms conventional KF methods on a synthetic dataset, lowering the mean-squared error by several orders of magnitude.
arXiv Detail & Related papers (2024-02-07T00:54:35Z)
- Outlier-Insensitive Kalman Filtering Using NUV Priors [24.413595920205907]
In practice, observations are corrupted by outliers, severely impairing the performance of the Kalman filter (KF).
In this work, an outlier-insensitive KF is proposed, in which robustness is achieved by modeling each potential outlier as a normally distributed random variable with unknown variance (NUV).
The NUV variances are estimated online, using both expectation-maximization (EM) and alternating maximization (AM).
arXiv Detail & Related papers (2022-10-12T11:00:13Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- Neural Kalman Filtering [62.997667081978825]
We show that a gradient-descent approximation to the Kalman filter requires only local computations with variance-weighted prediction errors (a minimal sketch follows this list).
We also show that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
arXiv Detail & Related papers (2021-02-19T16:43:15Z)
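To make the preceding claim concrete, here is a minimal sketch, under simplifying assumptions (a linear-Gaussian state-space model, a hand-picked step size, and the process-noise covariance standing in for the full predictive covariance), of a gradient-descent update driven by variance-weighted prediction errors. The function grad_descent_kalman_step and all numerical values are placeholders, not the paper's exact scheme.

```python
# Minimal sketch: refine the state estimate by gradient descent on a
# precision-weighted prediction-error objective instead of applying the
# exact Kalman update. Assumes a linear-Gaussian model; Q stands in for
# the full predictive covariance, a deliberate simplification.
import numpy as np

def grad_descent_kalman_step(x_prev, y, A, C, Q, R, n_iters=100, lr=0.05):
    Q_inv, R_inv = np.linalg.inv(Q), np.linalg.inv(R)
    x_pred = A @ x_prev              # prior prediction from the dynamics
    x = x_pred.copy()
    for _ in range(n_iters):
        e_obs = y - C @ x            # observation prediction error
        e_dyn = x - x_pred           # deviation from the dynamics prediction
        # Gradient of 0.5*e_obs' R^-1 e_obs + 0.5*e_dyn' Q^-1 e_dyn w.r.t. x:
        grad = -C.T @ R_inv @ e_obs + Q_inv @ e_dyn
        x = x - lr * grad            # local, variance-weighted update
    return x

# Usage on a toy 2-D state with a 1-D observation.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), 0.1 * np.eye(1)
x_prev = np.array([0.0, 1.0])
y = np.array([0.2])
print(grad_descent_kalman_step(x_prev, y, A, C, Q, R))  # ~[0.15, 1.0]
```

Each update uses only prediction errors weighted by the inverse noise variances, which is the sense in which such a scheme can approximate the Kalman update using only local computations.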
This list is automatically generated from the titles and abstracts of the papers on this site.