The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering in High Dimensions
- URL: http://arxiv.org/abs/2306.07774v3
- Date: Wed, 3 Jan 2024 09:48:07 GMT
- Title: The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering in High Dimensions
- Authors: Jonathan Schmidt, Philipp Hennig, Jörg Nick, Filip Tronarp
- Abstract summary: We propose a novel approximate filtering and smoothing method which propagates low-rank approximations of the covariance matrices.
Our method reduces computational complexity from cubic (for the Kalman filter) to quadratic in the state-space size in the worst case.
- Score: 32.30527731746912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inference and simulation in the context of high-dimensional dynamical systems
remain computationally challenging problems. Some form of dimensionality
reduction is required to make the problem tractable in general. In this paper,
we propose a novel approximate Gaussian filtering and smoothing method which
propagates low-rank approximations of the covariance matrices. This is
accomplished by projecting the Lyapunov equations associated with the
prediction step to a manifold of low-rank matrices, which are then solved by a
recently developed, numerically stable, dynamical low-rank integrator.
Meanwhile, the update steps are made tractable by noting that the covariance
update only transforms the column space of the covariance matrix, which is
low-rank by construction. The algorithm differentiates itself from existing
ensemble-based approaches in that the low-rank approximations of the covariance
matrices are deterministic, rather than stochastic. Crucially, this enables the
method to reproduce the exact Kalman filter as the low-rank dimension
approaches the true dimensionality of the problem. Our method reduces
computational complexity from cubic (for the Kalman filter) to \emph{quadratic}
in the state-space size in the worst-case, and can achieve \emph{linear}
complexity if the state-space model satisfies certain criteria. Through a set
of experiments in classical data-assimilation and spatio-temporal regression,
we show that the proposed method consistently outperforms the ensemble-based
methods in terms of error in the mean and covariance with respect to the exact
Kalman filter. This comes at no additional cost in terms of asymptotic
computational complexity.
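A minimal NumPy sketch of the two ingredients described in the abstract, under stated assumptions: the covariance is kept in factored form P ≈ L Lᵀ with L of size n × r, the process noise is assumed to be available in factored form Q = B Bᵀ, and the prediction step simply re-compresses the predicted factor with a truncated SVD (the paper instead solves a projected Lyapunov equation with a dynamical low-rank integrator). The update step illustrates the property stated above: the posterior factor is the prior factor right-multiplied by an r × r matrix, so only the column space of the covariance factor is transformed. All names are illustrative, not the authors' implementation.

```python
# Low-rank covariance filtering sketch (NOT the paper's exact algorithm; see lead-in).
import numpy as np

def predict_low_rank(m, L, A, B, rank):
    """Prediction with covariance factor L (P ~= L @ L.T), truncated back to `rank`."""
    m_pred = A @ m
    # Predicted covariance A P A^T + B B^T has factor [A L, B]; re-compress it.
    F = np.hstack([A @ L, B])              # n x (r + q) factor
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    L_pred = U[:, :rank] * s[:rank]        # best rank-`rank` factor in Frobenius norm
    return m_pred, L_pred

def update_low_rank(m, L, H, R, y):
    """Measurement update; only the column space of the factor is transformed."""
    W = H @ L                              # m x r
    S = W @ W.T + R                        # innovation covariance
    K = L @ np.linalg.solve(S, W).T        # Kalman gain L (W^T S^{-1}) = P H^T S^{-1}
    m_post = m + K @ (y - H @ m)
    # Posterior covariance: L (I_r - W^T S^{-1} W) L^T = (L C)(L C)^T.
    D = np.eye(L.shape[1]) - W.T @ np.linalg.solve(S, W)
    C = np.linalg.cholesky((D + D.T) / 2)  # symmetrize for numerical safety
    return m_post, L @ C

# Tiny smoke test on a random linear-Gaussian model.
rng = np.random.default_rng(0)
n, r, q, m_obs = 50, 5, 3, 10
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, q))
H = rng.standard_normal((m_obs, n))
R = 0.5 * np.eye(m_obs)
m0, L0 = np.zeros(n), 0.1 * rng.standard_normal((n, r))
m1, L1 = predict_low_rank(m0, L0, A, B, rank=r)
m2, L2 = update_low_rank(m1, L1, H, R, rng.standard_normal(m_obs))
print(m2.shape, L2.shape)   # (50,) (50, 5)
```

Because the factor only ever has r columns, the update costs scale with n·r and the observation dimension rather than n³, which is the source of the complexity reduction claimed above.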
Related papers
- $\ell_0$ factor analysis [25.14792952830999]
We formulate an optimization problem using the nuclear norm for the low-rank component.
An alternating minimization algorithm is designed for the solution of the optimization problem.
The effectiveness of the algorithm is verified via simulations on synthetic and real datasets.
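As a rough illustration of the "nuclear norm for the low-rank component plus alternating minimization" idea in this entry (and not the cited paper's $\ell_0$ method), the sketch below fits a sample covariance as low-rank-plus-diagonal by alternating eigenvalue soft-thresholding with a diagonal refit; the objective, the penalty weight lam, and the iteration count are assumptions.

```python
# Generic low-rank + diagonal covariance fit via block-coordinate minimization
# of 0.5*||Sigma - L - D||_F^2 + lam*||L||_*  (illustrative only).
import numpy as np

def factor_analysis_alt_min(Sigma, lam=0.5, iters=100):
    D = np.diag(np.diag(Sigma))
    L = np.zeros_like(Sigma)
    for _ in range(iters):
        # L-step: prox of lam*||.||_* at Sigma - D, i.e. eigenvalue
        # soft-thresholding (clipped at zero so L stays PSD).
        M = Sigma - D
        w, V = np.linalg.eigh((M + M.T) / 2)
        w = np.maximum(w - lam, 0.0)
        L = (V * w) @ V.T
        # D-step: best diagonal fit to the residual.
        D = np.diag(np.diag(Sigma - L))
    return L, D

rng = np.random.default_rng(1)
F = rng.standard_normal((30, 3))
Sigma = F @ F.T + np.diag(rng.uniform(0.5, 1.0, 30))   # true low-rank + diagonal
L, D = factor_analysis_alt_min(Sigma)
print(np.linalg.matrix_rank(L, tol=1e-6))
```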
arXiv Detail & Related papers (2024-11-13T09:40:23Z) - Computation-Aware Kalman Filtering and Smoothing [27.55456716194024]
We propose a probabilistic numerical inference method for high-dimensional Gauss-Markov models.
Our algorithm leverages GPU acceleration and crucially enables a tunable trade-off between computational cost and predictive uncertainty.
arXiv Detail & Related papers (2024-05-14T21:31:11Z) - Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z) - Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse
Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods.
Specifically, we prove that if the tangent space at a denoised sample given by Tweedie's formula forms a Krylov subspace, then CG initialized with the denoised data ensures that the data-consistency update remains in the tangent space.
Our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method.
arXiv Detail & Related papers (2023-03-10T07:42:49Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
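A hedged, generic sketch of a one-sample conditional gradient (Frank-Wolfe) iteration driven by a SAG-style averaged gradient, in the spirit of this entry but not reproducing the cited paper's method or rates; the least-squares finite sum, the $\ell_1$-ball constraint, and the step-size schedule are illustrative assumptions.

```python
# One-sample stochastic Frank-Wolfe with a SAG-style gradient estimate (illustrative).
import numpy as np

rng = np.random.default_rng(2)
N, d, radius = 200, 20, 5.0
A_data = rng.standard_normal((N, d))
b = A_data @ (rng.standard_normal(d) * (rng.random(d) < 0.2)) + 0.1 * rng.standard_normal(N)

x = np.zeros(d)
grad_table = np.zeros((N, d))                 # stored per-sample gradients (SAG memory)
grad_avg = np.zeros(d)

for t in range(1, 2001):
    i = rng.integers(N)                       # one sample per iteration
    g_new = (A_data[i] @ x - b[i]) * A_data[i]
    grad_avg += (g_new - grad_table[i]) / N   # keep the table average up to date
    grad_table[i] = g_new
    # Linear minimisation oracle over the l1-ball of the given radius.
    j = np.argmax(np.abs(grad_avg))
    s = np.zeros(d)
    s[j] = -radius * np.sign(grad_avg[j])
    gamma = 2.0 / (t + 2)                     # classical Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s

print(0.5 * np.mean((A_data @ x - b) ** 2))   # final average loss
```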
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Sampling Approximately Low-Rank Ising Models: MCMC meets Variational
Methods [35.24886589614034]
We consider Ising models on the hypercube with a general quadratic interaction matrix $J$.
Our general result implies the first polynomial-time sampling algorithms for low-rank Ising models.
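For orientation only, the sketch below draws samples from an Ising model with a low-rank interaction matrix using plain single-site Gibbs sampling; the cited paper's actual contribution (combining MCMC with variational methods, with guarantees for approximately low-rank $J$) is not reproduced, and the model size, field strength, and burn-in are assumptions.

```python
# Single-site Gibbs sampling for p(x) ~ exp(0.5 x^T J x + h^T x), x in {-1,+1}^n.
import numpy as np

rng = np.random.default_rng(3)
n, r = 40, 2
U = rng.standard_normal((n, r)) / np.sqrt(n)
J = U @ U.T                              # low-rank interaction matrix
np.fill_diagonal(J, 0.0)                 # zero self-interaction
h = 0.1 * rng.standard_normal(n)

x = rng.choice([-1.0, 1.0], size=n)
samples = []
for sweep in range(2000):
    for i in range(n):
        field = J[i] @ x + h[i]          # local field on spin i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.random() < p_plus else -1.0
    if sweep >= 500:                     # discard burn-in
        samples.append(x.copy())

print(np.mean(samples, axis=0)[:5])      # estimated magnetisations of the first spins
```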
arXiv Detail & Related papers (2022-02-17T21:43:50Z) - Splitting numerical integration for matrix completion [0.0]
We propose a new algorithm for low rank matrix approximation.
The algorithm is an adaptation of classical gradient descent within a splitting numerical-integration framework for optimization over low-rank matrices.
Experimental results show that our approach has good scalability for large-scale problems.
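As a baseline illustration of low-rank matrix completion by gradient descent on the factors (not the cited paper's splitting-integrator scheme), the sketch below fits only the observed entries of a synthetic matrix; rank, step size, and observation ratio are assumptions.

```python
# Matrix completion by gradient descent on the factors of X = U @ V.T (illustrative).
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 60, 40, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # true low-rank matrix
mask = rng.random((m, n)) < 0.4                                  # observed entries

U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
lr = 0.02
for _ in range(2000):
    R = mask * (U @ V.T - M)             # residual on observed entries only
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

err = np.linalg.norm((U @ V.T - M)[~mask]) / np.linalg.norm(M[~mask])
print(round(float(err), 4))              # relative error on the unobserved entries
```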
arXiv Detail & Related papers (2022-02-14T04:45:20Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
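The classical baseline for this quadratic Stiefel problem, max tr(XᵀAX) over orthonormal X, is orthogonal (block power) iteration, which converges to the dominant eigenspace; the sketch below shows only that baseline and omits the cited paper's sparsity-promoting modification and the permutation-synchronisation application.

```python
# Orthogonal iteration: dominant d-dimensional eigenspace of a symmetric matrix.
import numpy as np

def dominant_eigenspace(A, d, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], d)))   # random Stiefel point
    for _ in range(iters):
        X, _ = np.linalg.qr(A @ X)       # power step + QR retraction onto the manifold
    return X

rng = np.random.default_rng(5)
B = rng.standard_normal((50, 50))
A = B @ B.T                              # symmetric PSD test matrix
X = dominant_eigenspace(A, d=3)
# Objective value vs. sum of the three largest eigenvalues (should be close).
print(np.trace(X.T @ A @ X), np.linalg.eigvalsh(A)[-3:].sum())
```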
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that when training neural networks the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
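To illustrate only the "loss as a noisy observation" viewpoint (this is not the KaFiStO optimiser), the toy sketch below tracks a stochastic mini-batch loss with a scalar random-walk Kalman filter; the noise levels and the synthetic loss curve are assumptions.

```python
# Scalar Kalman filter tracking a noisy loss sequence (toy illustration).
import numpy as np

rng = np.random.default_rng(6)
T = 300
true_loss = 2.0 * np.exp(-np.arange(T) / 80.0)           # slowly decaying "full" loss
noisy_loss = true_loss + 0.3 * rng.standard_normal(T)    # mini-batch estimates

q, r = 1e-3, 0.3 ** 2        # random-walk process noise and observation noise
m, p = noisy_loss[0], 1.0    # filter mean and variance
estimates = []
for y in noisy_loss:
    p = p + q                            # predict: the loss follows a random walk
    k = p / (p + r)                      # Kalman gain
    m = m + k * (y - m)                  # update with the observed mini-batch loss
    p = (1.0 - k) * p
    estimates.append(m)

print(np.mean(np.abs(np.array(estimates) - true_loss)),
      np.mean(np.abs(noisy_loss - true_loss)))   # filtered error vs. raw noise
```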
arXiv Detail & Related papers (2021-07-07T16:13:57Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
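A minimal sketch-and-solve baseline with a Gaussian embedding for L2-regularised least squares, for orientation only; the cited paper's effective-dimension adaptive sketch sizes and SRHT variant are not reproduced, and the sketch size k and regulariser lam are assumptions.

```python
# Sketch-and-solve ridge regression with a Gaussian embedding (illustrative).
import numpy as np

rng = np.random.default_rng(7)
n, d, k, lam = 5000, 100, 400, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

S = rng.standard_normal((k, n)) / np.sqrt(k)     # Gaussian embedding
SA, Sb = S @ A, S @ b
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))  # relative deviation
```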
arXiv Detail & Related papers (2020-06-10T15:00:09Z)