Optimization or Architecture: How to Hack Kalman Filtering
- URL: http://arxiv.org/abs/2310.00675v1
- Date: Sun, 1 Oct 2023 14:00:18 GMT
- Title: Optimization or Architecture: How to Hack Kalman Filtering
- Authors: Ido Greenberg, Netanel Yannay, Shie Mannor
- Abstract summary: In non-linear filtering, it is traditional to compare non-linear architectures such as neural networks to the standard linear Kalman Filter (KF).
We argue that both should be optimized similarly, and to that end present the Optimized KF (OKF).
- Score: 52.640789351385266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In non-linear filtering, it is traditional to compare non-linear
architectures such as neural networks to the standard linear Kalman Filter
(KF). We observe that this mixes the evaluation of two separate components: the
non-linear architecture, and the parameter optimization method. In particular,
the non-linear model is often optimized, whereas the reference KF model is not.
We argue that both should be optimized similarly, and to that end present the
Optimized KF (OKF). We demonstrate that the KF may become competitive with
neural models - if optimized using OKF. This implies that experimental
conclusions of certain previous studies were derived from a flawed process. The
advantage of OKF over the standard KF is further studied theoretically and
empirically, in a variety of problems. Conveniently, OKF can replace the KF in
real-world systems by merely updating the parameters.
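Below is a minimal sketch of the idea, assuming PyTorch and known linear
models F and H; the trainable-covariance setup is illustrative and is not the
authors' released OKF implementation.

```python
# Sketch: treat the KF noise covariances Q and R as trainable parameters and
# optimize them against the filtering error on data, instead of estimating
# them from noise statistics. All shapes and the training setup are assumed.
import torch

class TrainableKF(torch.nn.Module):
    def __init__(self, F, H, dim_x, dim_z):
        super().__init__()
        self.F, self.H = F, H  # known dynamics and observation models
        # Factor parametrization keeps Q and R symmetric positive-definite.
        self.Lq = torch.nn.Parameter(torch.eye(dim_x))
        self.Lr = torch.nn.Parameter(torch.eye(dim_z))

    def Q(self):
        return self.Lq @ self.Lq.T + 1e-6 * torch.eye(self.Lq.shape[0])

    def R(self):
        return self.Lr @ self.Lr.T + 1e-6 * torch.eye(self.Lr.shape[0])

    def forward(self, zs, x0, P0):
        x, P, preds = x0, P0, []
        for z in zs:  # unroll the filter over the measurement sequence
            x = self.F @ x                                  # predict
            P = self.F @ P @ self.F.T + self.Q()
            S = self.H @ P @ self.H.T + self.R()
            K = P @ self.H.T @ torch.linalg.inv(S)          # Kalman gain
            x = x + K @ (z - self.H @ x)                    # update
            P = (torch.eye(P.shape[0]) - K @ self.H) @ P
            preds.append(x)
        return torch.stack(preds)

# Training: minimize state-estimation MSE on recorded trajectories, e.g.
#   kf = TrainableKF(F, H, dim_x, dim_z)
#   opt = torch.optim.Adam(kf.parameters(), lr=1e-2)
#   loss = ((kf(zs, x0, P0) - true_states) ** 2).mean()
#   loss.backward(); opt.step()
```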
Related papers
- A competitive baseline for deep learning enhanced data assimilation using conditional Gaussian ensemble Kalman filtering [0.0]
We study two non-linear extensions of the vanilla EnKF, dubbed the conditional-Gaussian EnKF (CG-EnKF) and the normal score EnKF (NS-EnKF).
We compare these models against a state-of-the-art deep learning based particle filter called the score filter (SF).
Our analysis also demonstrates that the CG-EnKF and NS-EnKF can handle highly non-Gaussian additive noise perturbations, with the latter typically outperforming the former.
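For reference, here is a minimal sketch of the vanilla (stochastic) EnKF
analysis step that both extensions build on; the shapes and the linear
observation operator H are assumptions.

```python
# Stochastic EnKF analysis step with perturbed observations (vanilla EnKF).
import numpy as np

def enkf_analysis(ensemble, H, R, y, rng=np.random.default_rng(0)):
    """ensemble: (N, dim_x) forecast members; y: (dim_z,) observation."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # centered anomalies
    P = X.T @ X / (N - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # ensemble Kalman gain
    # Perturbing y keeps the analysis spread statistically consistent with R.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T
```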
arXiv Detail & Related papers (2024-09-22T02:54:33Z) - Inverse Cubature and Quadrature Kalman filters [16.975704972827305]
We develop inverse cubature KF (I-CKF), inverse quadrature KF (I-QKF), and inverse cubature-quadrature KF (I-CQKF).
We derive the stability conditions for the proposed filters in the exponential-mean-squared-boundedness sense and prove the filters' consistency.
arXiv Detail & Related papers (2023-03-18T03:48:39Z) - Online Hyperparameter Optimization for Class-Incremental Learning [99.70569355681174]
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase.
An inherent challenge of CIL is the stability-plasticity tradeoff, i.e., CIL models should keep stable to retain old knowledge and keep plastic to absorb new knowledge.
We propose an online learning method that can adaptively optimize the tradeoff without knowing the setting a priori.
arXiv Detail & Related papers (2023-01-11T17:58:51Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Outlier-Insensitive Kalman Filtering Using NUV Priors [24.413595920205907]
In practice, observations are corrupted by outliers, severely impairing the performance of the Kalman filter (KF).
In this work, an outlier-insensitive KF is proposed, where robustness is achieved by modeling each potential outlier as a normally distributed random variable with unknown variance (NUV).
The NUV variances are estimated online, using both expectation-maximization (EM) and alternating maximization (AM).
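A toy, single-scalar-measurement sketch of the NUV idea, under simplifying
assumptions (this is not the paper's algorithm): the outlier is modeled as
extra noise u ~ N(0, s) with unknown variance s, and s is estimated by EM.

```python
# EM for the unknown outlier variance s in y = Hx + v + u, with scalar y,
# v ~ N(0, R) and u ~ N(0, s). Initialization and iteration count are assumed.
def nuv_em_update(y, Hx_prior, HPH, R, n_iter=10):
    """y: measurement; Hx_prior: predicted measurement mean;
    HPH: predicted measurement variance; R: nominal noise variance."""
    e = y - Hx_prior              # innovation
    s = e ** 2                    # data-driven initialization (heuristic)
    for _ in range(n_iter):
        S = HPH + R + s           # total innovation variance
        u_mean = (s / S) * e      # E-step: posterior mean of the outlier u
        u_var = s - s ** 2 / S    # E-step: posterior variance of u
        s = u_mean ** 2 + u_var   # M-step: s <- E[u^2 | y]
    return R + s                  # effective measurement variance for the KF

# A clean measurement drives s toward 0; a gross outlier inflates R + s, so
# the subsequent Kalman update automatically down-weights that measurement.
```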
arXiv Detail & Related papers (2022-10-12T11:00:13Z) - Optimizing Partial Area Under the Top-k Curve: Theory and Practice [151.5072746015253]
We develop a novel metric named partial Area Under the top-k Curve (AUTKC).
AUTKC has a better discrimination ability, and its Bayes optimal score function could give a correct top-K ranking with respect to the conditional probability.
We present an empirical surrogate risk minimization framework to optimize the proposed metric.
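One plausible reading of the metric - the average of Top-k accuracies over
k = 1..K - is sketched below; this reading is an assumption, and the paper's
exact definition may differ.

```python
# Top-k accuracy curve and its (normalized) partial area up to K.
import numpy as np

def autkc(scores, labels, K):
    """scores: (n, C) class scores; labels: (n,) true class indices."""
    order = np.argsort(-scores, axis=1)                   # classes by score
    ranks = np.argmax(order == labels[:, None], axis=1)   # rank of true class
    topk_acc = [(ranks < k).mean() for k in range(1, K + 1)]
    return float(np.mean(topk_acc))                       # area / K
```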
arXiv Detail & Related papers (2022-09-03T11:09:13Z) - Inverse Extended Kalman Filter -- Part II: Highly Non-Linear and Uncertain Systems [18.244578289687123]
This paper proposes an inverse extended Kalman filter (I-EKF) to address the inverse filtering problem in non-linear systems.
Part I develops the theory of I-EKF (with and without unknown inputs) and I-KF (with unknown inputs).
Part II extends this theory to highly non-linear and uncertain systems.
arXiv Detail & Related papers (2022-08-13T16:55:39Z) - Using Kalman Filter The Right Way: Noise Estimation Is Not Optimal [46.556605821252276]
We show that even a seemingly small violation of KF assumptions can significantly modify the effective noise.
We suggest a method to apply gradient-based optimization efficiently to the symmetric and positive-definite (SPD) parameters of the KF.
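A standard way to make gradient steps respect the SPD constraint is to
optimize an unconstrained triangular factor and rebuild the matrix from it; a
sketch under that assumption (not necessarily the paper's exact scheme):

```python
# Unconstrained gradient descent over an SPD matrix Q via Q = L L^T, where L
# is lower-triangular with a positive (exponentiated) diagonal.
import torch

dim = 3
raw = torch.zeros(dim, dim, requires_grad=True)  # unconstrained parameters

def spd(raw):
    L = torch.tril(raw, diagonal=-1)                     # strict lower part
    L = L + torch.diag(torch.exp(torch.diagonal(raw)))   # positive diagonal
    return L @ L.T                                       # SPD by construction

target = torch.tensor([[2.0, 0.5, 0.0],
                       [0.5, 1.0, 0.2],
                       [0.0, 0.2, 0.5]])                 # toy SPD target
opt = torch.optim.SGD([raw], lr=0.1)
for _ in range(200):
    loss = ((spd(raw) - target) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
# spd(raw) remains symmetric positive-definite at every optimization step.
```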
arXiv Detail & Related papers (2021-04-06T08:59:15Z) - LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
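A sketch of the underlying linearization step, f(x; w0 + d) ≈ f(x; w0) + J d,
assuming PyTorch's torch.func; LQF's specific loss and optimization changes
are not reproduced here.

```python
# First-order Taylor expansion of a pre-trained model around its weights w0.
import torch
from torch.func import functional_call, jvp

def linearized_forward(model, params0, delta, x):
    """params0: dict of pre-trained weights; delta: dict of weight offsets."""
    def f(params):
        return functional_call(model, params, (x,))
    out0, jvp_out = jvp(f, (params0,), (delta,))  # f(x; w0) and J @ delta
    return out0 + jvp_out

# params0 = dict(model.named_parameters()); delta holds trainable offsets.
# With a quadratic (MSE) loss, training delta is a convex problem, which is
# what makes "linear quadratic" fine-tuning attractive.
```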
arXiv Detail & Related papers (2020-12-21T06:40:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.