Tensor network square root Kalman filter for online Gaussian process regression
- URL: http://arxiv.org/abs/2409.03276v1
- Date: Thu, 5 Sep 2024 06:38:27 GMT
- Title: Tensor network square root Kalman filter for online Gaussian process regression
- Authors: Clara Menzen, Manon Kok, Kim Batselier
- Abstract summary: We develop, for the first time, a tensor network square root Kalman filter, and apply it to high-dimensional online Gaussian process regression.
In our experiments, we demonstrate that our method is equivalent to the conventional Kalman filter when choosing a full-rank tensor network.
We also apply our method to a real-life system identification problem where we estimate $4^{14}$ parameters on a standard laptop.
- Score: 5.482420806459269
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The state-of-the-art tensor network Kalman filter lifts the curse of dimensionality for high-dimensional recursive estimation problems. However, the required rounding operation can cause filter divergence due to the loss of positive definiteness of covariance matrices. We solve this issue by developing, for the first time, a tensor network square root Kalman filter, and apply it to high-dimensional online Gaussian process regression. In our experiments, we demonstrate that our method is equivalent to the conventional Kalman filter when choosing a full-rank tensor network. Furthermore, we apply our method to a real-life system identification problem where we estimate $4^{14}$ parameters on a standard laptop. The estimated model outperforms the state-of-the-art tensor network Kalman filter in terms of prediction accuracy and uncertainty quantification.
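The abstract does not include code, but the square-root mechanism it builds on can be illustrated with a minimal NumPy sketch (no tensor networks): instead of propagating the covariance P itself, one propagates a factor S with P = S S^T through a QR factorisation, so the implied covariance remains positive semi-definite by construction. The function below is a generic square-root Kalman measurement update written under that assumption; it is not the paper's tensor network square root Kalman filter.

```python
import numpy as np

def sqrt_kf_measurement_update(x, S, y, H, S_r):
    """One generic square-root Kalman measurement update.

    x   : (n,)   prior mean
    S   : (n, n) prior covariance factor, P = S @ S.T
    y   : (m,)   measurement
    H   : (m, n) measurement matrix
    S_r : (m, m) measurement-noise factor, R = S_r @ S_r.T
    Returns the posterior mean and a posterior covariance factor.
    """
    m, n = H.shape
    # Pre-array whose triangularisation yields all update quantities at once.
    pre = np.block([[S_r, H @ S],
                    [np.zeros((n, m)), S]])
    # Triangularise: pre = L @ Q.T with L lower triangular.
    L = np.linalg.qr(pre.T, mode="r").T
    S_y = L[:m, :m]       # innovation factor, S_y @ S_y.T = H P H^T + R
    K_bar = L[m:, :m]     # "normalised" gain block
    S_post = L[m:, m:]    # posterior factor, S_post @ S_post.T = P_post
    x_post = x + K_bar @ np.linalg.solve(S_y, y - H @ x)
    return x_post, S_post
```

Because only the factor S_post is ever formed, the implied covariance S_post @ S_post.T cannot lose positive semi-definiteness through accumulated round-off, which is the failure mode the paper addresses for the tensor network setting.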
Related papers
- Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability [59.758009422067]
We propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models.
Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan.
Experiments show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
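For reference, a minimal sequential sketch of the closed-form Gaussian inference such a layer performs is shown below, written for a hypothetical linear state-space model x_t = A x_{t-1} + q_t, y_t = H x_t + r_t with Gaussian noise. The paper's layer computes the same filtered distributions but parallelises the recursion with an associative scan; this sketch deliberately replaces that with a plain loop.

```python
import numpy as np

def kalman_layer_forward(ys, A, Q, H, R, m0, P0):
    """Closed-form Gaussian filtering through a linear state-space model.

    ys : (T, d_y) observations; A, Q : (d_x, d_x) dynamics / process noise;
    H : (d_y, d_x), R : (d_y, d_y) observation model; m0, P0 : Gaussian prior.
    Returns filtered means (T, d_x) and covariances (T, d_x, d_x).
    """
    mean, cov = m0, P0
    means, covs = [], []
    for y in ys:
        # Predict through the linear dynamics.
        mean, cov = A @ mean, A @ cov @ A.T + Q
        # Closed-form Gaussian (Kalman) update with the new observation.
        S = H @ cov @ H.T + R                # innovation covariance
        K = np.linalg.solve(S, H @ cov).T    # Kalman gain
        mean = mean + K @ (y - H @ mean)
        cov = cov - K @ S @ K.T
        means.append(mean)
        covs.append(cov)
    return np.stack(means), np.stack(covs)
```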
arXiv Detail & Related papers (2024-09-25T11:22:29Z)
- Outlier-robust Kalman Filtering through Generalised Bayes [45.51425214486509]
We derive a novel, provably robust, and closed-form Bayesian update rule for online filtering in state-space models.
Our method matches or outperforms other robust filtering methods at a much lower computational cost.
arXiv Detail & Related papers (2024-05-09T09:40:56Z)
- An adaptive ensemble filter for heavy-tailed distributions: tuning-free inflation and localization [0.3749861135832072]
Heavy tails are a common feature of filtering distributions, resulting from the nonlinear dynamical and observation processes.
We propose an algorithm to estimate the prior-to-posterior update from samples of the joint forecast distribution of the states and observations.
We demonstrate the benefits of this new ensemble filter on challenging filtering problems.
arXiv Detail & Related papers (2023-10-12T21:56:14Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
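To illustrate why a low-rank plus diagonal structure keeps the per-step cost linear in the number of parameters, the sketch below solves a linear system whose matrix has the form diag(d) + W W^T using the Woodbury identity. It is a generic NumPy example of that matrix structure, not the paper's EKF update.

```python
import numpy as np

def solve_diag_plus_lowrank(d, W, v):
    """Solve (diag(d) + W @ W.T) x = v via the Woodbury identity.

    d : (n,)   positive diagonal part
    W : (n, r) low-rank factor with r << n
    v : (n,)   right-hand side
    """
    Dinv_v = v / d                               # D^{-1} v, O(n)
    Dinv_W = W / d[:, None]                      # D^{-1} W, O(n r)
    core = np.eye(W.shape[1]) + W.T @ Dinv_W     # (r, r) capacitance matrix
    return Dinv_v - Dinv_W @ np.linalg.solve(core, W.T @ Dinv_v)
```

For n parameters and rank r, this costs O(n r^2 + r^3) per solve instead of the O(n^3) of a dense factorisation, which is where the linear-in-parameters scaling comes from.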
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that when training neural networks the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
arXiv Detail & Related papers (2021-07-07T16:13:57Z)
- Neural Kalman Filtering [62.997667081978825]
We show that a gradient-descent approximation to the Kalman filter requires only local computations with variance weighted prediction errors.
We also show that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
arXiv Detail & Related papers (2021-02-19T16:43:15Z)
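The idea in the last entry (Neural Kalman Filtering) can be read as gradient descent on a quadratic objective whose gradient contains only precision-weighted (i.e. variance-weighted) prediction errors and whose minimiser is the standard Kalman posterior mean. The snippet below is an illustrative sketch under that reading, not the paper's neural circuit or its Hebbian learning rule; the step size lr is a hypothetical tuning parameter that must be small enough for the iteration to converge.

```python
import numpy as np

def kalman_mean_by_gradient_descent(x_pred, P, y, H, R, lr=0.05, n_steps=1000):
    """Gradient descent on variance-weighted prediction errors.

    Minimises
        J(x) = 0.5 * (y - H x)^T R^{-1} (y - H x)
             + 0.5 * (x - x_pred)^T P^{-1} (x - x_pred),
    whose minimiser equals the Kalman filter posterior mean.
    """
    P_inv, R_inv = np.linalg.inv(P), np.linalg.inv(R)
    x = x_pred.copy()
    for _ in range(n_steps):
        eps_y = R_inv @ (y - H @ x)      # precision-weighted observation error
        eps_x = P_inv @ (x - x_pred)     # precision-weighted prior error
        x = x + lr * (H.T @ eps_y - eps_x)
    return x
```

At convergence the result coincides with the usual closed-form update x_pred + K @ (y - H @ x_pred), where K is the Kalman gain.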
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.