Using Kalman Filter The Right Way: Noise Estimation Is Not Optimal
- URL: http://arxiv.org/abs/2104.02372v1
- Date: Tue, 6 Apr 2021 08:59:15 GMT
- Title: Using Kalman Filter The Right Way: Noise Estimation Is Not Optimal
- Authors: Ido Greenberg, Shie Mannor, Netanel Yannay
- Abstract summary: We show that even a seemingly small violation of KF assumptions can significantly modify the effective noise.
We suggest a method to apply gradient-based optimization efficiently to the symmetric and positive-definite (SPD) parameters of KF.
- Score: 46.556605821252276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Determining the noise parameters of a Kalman Filter (KF) has been researched
for decades. The research focuses on estimation of the noise under various
conditions, since noise estimation is considered equivalent to error
minimization. However, we show that even a seemingly small violation of KF
assumptions can significantly modify the effective noise, breaking the
equivalence between the tasks and making noise estimation a highly sub-optimal
strategy. In particular, whoever tests a new learning-based algorithm in
comparison to a (variant of) KF with standard parameter tuning, essentially
conducts an unfair comparison between an optimized algorithm and a
non-optimized one. We suggest a method (based on Cholesky decomposition) to
apply gradient-based optimization efficiently to the symmetric and
positive-definite (SPD) parameters of KF, so that KF can be optimized similarly
to common neural networks. The benefits of this method are demonstrated for
both Radar tracking and video tracking. For Radar tracking we also show how a
non-linear neural-network-based model can seem to reduce the tracking errors
significantly compared to a KF - and how this reduction entirely vanishes once
the KF is optimized. Through a detailed case-study, we also demonstrate that KF
requires non-trivial design decisions to be made, and that parameter
optimization makes KF more robust to these decisions.
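As a concrete illustration of the Cholesky-based approach described above, here is a minimal sketch (not the authors' code; the model, dimensions, and data are illustrative) of optimizing the KF noise covariances Q and R by gradient descent, parameterized through lower-triangular Cholesky factors so that the SPD constraint holds by construction:

```python
# Minimal sketch: tune KF noise covariances with gradient descent, as suggested
# by the abstract. Q and R are built from Cholesky factors, so any unconstrained
# parameter values yield valid SPD matrices. Dimensions and data are toy stand-ins.
import torch

torch.manual_seed(0)
dim = 2  # hypothetical state/observation dimension

# Unconstrained parameters: lower-triangular Cholesky factors of Q and R.
L_q = torch.nn.Parameter(torch.eye(dim))
L_r = torch.nn.Parameter(torch.eye(dim))

def spd(L):
    """Map an unconstrained matrix to an SPD matrix via tril(L) @ tril(L).T."""
    L_tril = torch.tril(L)
    return L_tril @ L_tril.T + 1e-6 * torch.eye(L.shape[0])

def kalman_filter(zs, F, H, Q, R):
    """Differentiable KF pass; returns the filtered state estimates."""
    x = torch.zeros(dim)
    P = torch.eye(dim)
    estimates = []
    for z in zs:
        # Predict step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step.
        S = H @ P @ H.T + R
        K = P @ H.T @ torch.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (torch.eye(dim) - K @ H) @ P
        estimates.append(x)
    return torch.stack(estimates)

# Toy linear system with synthetic trajectories (stand-ins for real data).
F = torch.tensor([[1.0, 1.0], [0.0, 1.0]])
H = torch.eye(dim)
true_states = torch.cumsum(torch.randn(50, dim) * 0.1, dim=0)
observations = true_states + torch.randn(50, dim) * 0.5

# Optimize the KF like a neural network: minimize the squared estimation error.
optimizer = torch.optim.Adam([L_q, L_r], lr=1e-2)
for step in range(200):
    optimizer.zero_grad()
    estimates = kalman_filter(observations, F, H, spd(L_q), spd(L_r))
    loss = ((estimates - true_states) ** 2).mean()
    loss.backward()
    optimizer.step()
```

Because spd() maps unconstrained entries to valid covariance matrices, an off-the-shelf optimizer such as Adam can tune the KF exactly like a neural network, which is the fair-comparison point the abstract argues for.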
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates normalized SGD with clipping (NSGDC) and its variance-reduced variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
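For context, a hedged sketch of the two gradient transformations involved; the exact NSGDC/NSGDC-VR recursions and step-size schedules are in the cited paper, and all constants here are illustrative:

```python
# Sketch of gradient normalization and clipping as preprocessing for an SGD step.
import numpy as np

def normalize(g, eps=1e-12):
    """Gradient normalization: rescale the stochastic gradient to unit norm."""
    return g / (np.linalg.norm(g) + eps)

def clip(g, tau):
    """Gradient clipping: cap the norm at tau, useful under heavy-tailed noise."""
    n = np.linalg.norm(g)
    return g if n <= tau else g * (tau / n)

def sgd_step(x, g, lr=0.01):
    """Plain SGD step on an already-transformed gradient."""
    return x - lr * g

# A normalized step, with or without prior clipping:
# x = sgd_step(x, normalize(clip(g, tau=1.0)))
```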
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise [3.92625489118339]
We propose a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise.
We experimentally compare our method to the state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions.
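A hedged toy illustration of why re-evaluation helps under additive Gaussian noise, with a simple standard-error stopping rule standing in for the adaptive choice the paper derives:

```python
# Averaging n noisy evaluations shrinks the noise std by a factor of sqrt(n).
# The stopping rule below is an illustrative stand-in, not the paper's rule.
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, sigma=0.3):
    """Sphere function corrupted by additive Gaussian white noise (toy example)."""
    return float(np.sum(x**2)) + rng.normal(0.0, sigma)

def adaptive_mean(x, tol=0.05, n_max=10_000):
    """Re-evaluate until the standard error of the mean drops below tol."""
    samples = [noisy_f(x) for _ in range(5)]  # small initial batch
    while len(samples) < n_max:
        se = np.std(samples, ddof=1) / np.sqrt(len(samples))
        if se < tol:
            break
        samples.append(noisy_f(x))
    return np.mean(samples), len(samples)

value, n_used = adaptive_mean(np.array([0.5, -0.2]))
```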
arXiv Detail & Related papers (2024-09-25T09:10:21Z) - Optimization or Architecture: How to Hack Kalman Filtering [52.640789351385266]
In non-linear filtering, it is traditional to compare non-linear architectures such as neural networks to the standard linear Kalman Filter (KF).
We argue that both should be optimized similarly, and to that end present the Optimized KF (OKF).
arXiv Detail & Related papers (2023-10-01T14:00:18Z) - Outlier-Insensitive Kalman Filtering Using NUV Priors [24.413595920205907]
In practice, observations are corrupted by outliers, severely impairing the Kalman filter (KF)'s performance.
In this work, an outlier-insensitive KF is proposed, where robustness is achieved by modeling each potential outlier as a normally distributed random variable with unknown variance (NUV).
The NUVs' variances are estimated online, using both expectation-maximization (EM) and alternating maximization (AM).
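A hedged, scalar sketch of the NUV idea (not the authors' implementation; the EM-style refresh below is a simplified stand-in): each measurement carries its own unknown outlier variance, so suspected outliers are down-weighted in the KF update:

```python
# One scalar KF measurement update with a per-measurement NUV variance that is
# refined by a short EM-style fixed-point loop. Large residuals inflate the
# innovation variance, shrinking the gain and down-weighting the outlier.
def nuv_kf_update(x, P, z, H=1.0, R=0.1, em_iters=5):
    """KF update with an EM-refined NUV variance for the potential outlier."""
    s = 0.0  # NUV variance of the potential outlier on this measurement
    for _ in range(em_iters):
        S = H * P * H + R + s          # innovation variance incl. outlier term
        K = P * H / S                  # Kalman gain
        x_new = x + K * (z - H * x)
        P_new = (1.0 - K * H) * P
        resid = z - H * x_new
        s = resid**2                   # simplified stand-in for the EM variance update
    return x_new, P_new
```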
arXiv Detail & Related papers (2022-10-12T11:00:13Z) - Greedy versus Map-based Optimized Adaptive Algorithms for
random-telegraph-noise mitigation by spectator qubits [6.305016513788048]
In a scenario where data-storage qubits are kept in isolation as far as possible, noise mitigation can still be done using additional noise probes.
We construct a theoretical model assuming projective measurements on the qubits, and derive the performance of different measurement and control strategies.
We show, analytically and numerically, that MOAAAR outperforms the Greedy algorithm, especially in the regime of high noise sensitivity of the spectator qubit (SQ).
arXiv Detail & Related papers (2022-05-25T08:25:10Z) - Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximate gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
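For reference, a hedged, naive computation of the quantity being optimized, the partial AUC restricted to a range of false positive rates (the paper's contribution is an efficient gradient-based method for optimizing it at scale; this brute-force version is only illustrative):

```python
# Naive pAUC: fraction of (positive, negative) pairs ranked correctly, counting
# only the negatives whose rank falls inside the target FPR band.
import numpy as np

def partial_auc(scores_pos, scores_neg, fpr_lo=0.05, fpr_hi=0.5):
    """Compute pAUC over the FPR range [fpr_lo, fpr_hi] by pairwise comparison."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.sort(np.asarray(scores_neg, dtype=float))[::-1]  # hardest negatives first
    lo = int(np.floor(fpr_lo * len(neg)))
    hi = int(np.ceil(fpr_hi * len(neg)))
    band = neg[lo:hi]                                         # negatives in the FPR band
    return float((pos[:, None] > band[None, :]).mean())
```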
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Partial Identification with Noisy Covariates: A Robust Optimization
Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
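A hedged sketch of the STORM-style variance-reduced momentum estimator the paper builds on (its adaptive rules for the learning rate and the momentum parameter are the contribution, and are replaced here by a fixed illustrative constant):

```python
# STORM-style direction: a momentum estimator with a correction term that
# requires evaluating both gradients on the SAME minibatch xi_t.
def storm_direction(grad_new, grad_old_same_sample, d_prev, a=0.1):
    """d_t = grad f(x_t; xi_t) + (1 - a) * (d_{t-1} - grad f(x_{t-1}; xi_t)).

    The correction term (d_prev - grad_old_same_sample) is what reduces the
    variance of the momentum estimator relative to plain SGD with momentum.
    """
    return grad_new + (1.0 - a) * (d_prev - grad_old_same_sample)

# One optimization step then looks like: x = x - lr * d_t
```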
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that, when training neural networks, the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
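A hedged toy version of that framing: a scalar Kalman filter tracking the latent loss behind noisy minibatch losses (KaFiStO itself filters in parameter space; this scalar version only conveys the noisy-observation viewpoint):

```python
# Scalar KF with a random-walk model for the latent loss value; each minibatch
# loss is treated as a noisy observation of it.
import numpy as np

def kf_track_loss(noisy_losses, q=1e-4, r=1e-2):
    """Smooth a sequence of noisy minibatch losses with a scalar Kalman filter."""
    x, P = noisy_losses[0], 1.0
    smoothed = []
    for z in noisy_losses:
        P = P + q                 # predict: latent loss drifts slowly
        K = P / (P + r)           # Kalman gain
        x = x + K * (z - x)       # update with the noisy minibatch loss
        P = (1.0 - K) * P
        smoothed.append(x)
    return np.array(smoothed)
```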
arXiv Detail & Related papers (2021-07-07T16:13:57Z) - A Comparison of Various Classical Optimizers for a Variational Quantum
Linear Solver [0.0]
Variational Hybrid Quantum Classical Algorithms (VHQCAs) are a class of quantum algorithms intended to run on noisy quantum devices.
These algorithms employ a parameterized quantum circuit (ansatz) and a quantum-classical feedback loop.
A classical device is used to optimize the parameters in order to minimize a cost function that can be computed far more efficiently on a quantum device.
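A hedged sketch of that feedback loop, with the quantum device stood in for by a black-box cost function (the cost and the optimizer choice here are illustrative; COBYLA is one of the gradient-free classical optimizers such comparisons typically include):

```python
# Classical outer loop of a VHQCA: a gradient-free optimizer tunes the ansatz
# parameters against a cost that, on real hardware, the quantum device estimates.
import numpy as np
from scipy.optimize import minimize

def circuit_cost(theta):
    """Stand-in for the cost the quantum device would estimate (hypothetical)."""
    return float(np.sum((theta - 0.5) ** 2))  # toy landscape, not a real circuit

result = minimize(circuit_cost, x0=np.zeros(4), method="COBYLA")
print(result.x)  # optimized (toy) ansatz parameters
```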
arXiv Detail & Related papers (2021-06-16T10:40:00Z)