Outlier-Insensitive Kalman Filtering Using NUV Priors
- URL: http://arxiv.org/abs/2210.06083v1
- Date: Wed, 12 Oct 2022 11:00:13 GMT
- Title: Outlier-Insensitive Kalman Filtering Using NUV Priors
- Authors: Shunit Truzman, Guy Revach, Nir Shlezinger, and Itzik Klein
- Abstract summary: In practice, observations are corrupted by outliers, severely impairing the performance of the Kalman filter (KF).
In this work, an outlier-insensitive KF is proposed, where robustness is achieved by modeling each potential outlier as a normally distributed random variable with unknown variance (NUV).
The NUVs' variances are estimated online, using both expectation-maximization (EM) and alternating maximization (AM).
- Score: 24.413595920205907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Kalman filter (KF) is a widely-used algorithm for tracking the latent
state of a dynamical system from noisy observations. For systems that are
well-described by linear Gaussian state space models, the KF minimizes the
mean-squared error (MSE). However, in practice, observations are corrupted by
outliers, severely impairing the KF's performance. In this work, an
outlier-insensitive KF is proposed, where robustness is achieved by modeling
each potential outlier as a normally distributed random variable with unknown
variance (NUV). The NUVs' variances are estimated online, using both
expectation-maximization (EM) and alternating maximization (AM). The former was
previously proposed for the task of smoothing with outliers and is adapted here
to filtering. While both EM and AM achieve the same performance and outperform
the competing algorithms, the AM approach is less complex and thus requires 40
percent less run-time. Our empirical study demonstrates that our proposed
outlier-insensitive KF outperforms previously proposed algorithms in terms of
MSE, and that for data clean of outliers it reverts to the classic KF, i.e.,
MSE optimality is preserved.
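The NUV mechanism described in the abstract can be illustrated as a single measurement-update step. The sketch below is a minimal interpretation, not the paper's exact algorithm: it assumes a diagonal nominal noise covariance R, an independent additive outlier term per observation component, and a plain EM fixed-point iteration for the unknown variances. For inliers the estimated variances shrink toward zero (approaching the classic KF update); for outliers they grow, effectively down-weighting the corrupted component.

```python
import numpy as np

def oikf_nuv_update(x_pred, P_pred, y, H, R, n_iter=20):
    """One measurement update of an outlier-insensitive KF (sketch).

    Each observation component y_i is augmented with an additive outlier
    term u_i ~ N(0, s_i) with unknown variance s_i (a NUV prior). The s_i
    are estimated by EM, then a standard KF update is run with the
    inflated observation covariance R + diag(s).
    Assumes a diagonal nominal noise covariance R for simplicity.
    """
    r_pred = y - H @ x_pred                      # innovation
    c = np.diag(H @ P_pred @ H.T) + np.diag(R)   # nominal innovation variances
    s = r_pred ** 2                              # NUV variances, EM init
    for _ in range(n_iter):
        S = c + s                                # inflated innovation variance
        u_hat = s / S * r_pred                   # posterior mean of outlier term
        u_var = s * c / S                        # posterior variance of outlier term
        s = u_hat ** 2 + u_var                   # EM step: posterior second moment
    # Standard KF update with the inflated observation covariance
    R_eff = R + np.diag(s)
    S_mat = H @ P_pred @ H.T + R_eff
    K = P_pred @ H.T @ np.linalg.inv(S_mat)
    x = x_pred + K @ r_pred
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```

For a scalar example with x_pred = 0, P_pred = R = 1, an outlying observation y = 10 yields a corrected state near 0.1 (a classic KF would jump to 5.0), while a consistent observation y = 0.5 yields a state close to the classic KF value of 0.25.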
Related papers
- An adaptive ensemble filter for heavy-tailed distributions: tuning-free
inflation and localization [0.3749861135832072]
Heavy tails are a common feature of filtering distributions, resulting from nonlinear dynamical and observation processes.
We propose an algorithm to estimate the prior-to-posterior update from samples of the joint forecast distribution of the states and observations.
We demonstrate the benefits of this new ensemble filter on challenging filtering problems.
arXiv Detail & Related papers (2023-10-12T21:56:14Z) - Outlier-Insensitive Kalman Filtering: Theory and Applications [26.889182816155838]
We propose a parameter-free algorithm which mitigates the harmful effect of outliers while requiring only a short iterative process within the standard update step of the linear Kalman filter.
arXiv Detail & Related papers (2023-09-18T06:33:28Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Using Kalman Filter The Right Way: Noise Estimation Is Not Optimal [46.556605821252276]
We show that even a seemingly small violation of KF assumptions can significantly modify the effective noise.
We suggest a method to apply gradient-based optimization efficiently to the symmetric and positive-definite (SPD) parameters of KF.
arXiv Detail & Related papers (2021-04-06T08:59:15Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of
Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Revisiting Robust Model Fitting Using Truncated Loss [19.137291311347788]
New algorithms are applied to various 2D/3D registration problems.
They outperform RANSAC and approximate MC methods at high outlier ratios.
New algorithms also compare favorably with state-of-the-art registration methods, especially under high noise and outlier levels.
arXiv Detail & Related papers (2020-08-04T14:10:41Z) - Least Squares Regression with Markovian Data: Fundamental Limits and
Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information-theoretic minimax lower bounds for this problem in terms of $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay--a popular reinforcement learning technique--that achieves a significantly better error rate.
arXiv Detail & Related papers (2020-06-16T04:26:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.