Kalman Filtering with Adversarial Corruptions
- URL: http://arxiv.org/abs/2111.06395v1
- Date: Thu, 11 Nov 2021 18:59:21 GMT
- Title: Kalman Filtering with Adversarial Corruptions
- Authors: Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau
- Abstract summary: We give the first strong provable guarantees for linear quadratic estimation when even a constant fraction of measurements have been adversarially corrupted.
Our work is in a challenging Bayesian setting where the number of measurements scales with the complexity of what we need to estimate.
We develop a suite of new techniques to robustly extract information across different time steps and over varying time scales.
- Score: 33.99155519390116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Here we revisit the classic problem of linear quadratic estimation, i.e.
estimating the trajectory of a linear dynamical system from noisy measurements.
The celebrated Kalman filter gives an optimal estimator when the measurement
noise is Gaussian, but is widely known to break down when one deviates from
this assumption, e.g. when the noise is heavy-tailed. Many ad hoc heuristics
have been employed in practice for dealing with outliers. In a pioneering work,
Schick and Mitter gave provable guarantees when the measurement noise is a
known infinitesimal perturbation of a Gaussian and raised the important
question of whether one can get similar guarantees for large and unknown
perturbations.
In this work we give a truly robust filter: we give the first strong provable
guarantees for linear quadratic estimation when even a constant fraction of
measurements have been adversarially corrupted. This framework can model
heavy-tailed and even non-stationary noise processes. Our algorithm robustifies
the Kalman filter in the sense that it competes with the optimal algorithm that
knows the locations of the corruptions. Our work is in a challenging Bayesian
setting where the number of measurements scales with the complexity of what we
need to estimate. Moreover, in linear dynamical systems past information decays
over time. We develop a suite of new techniques to robustly extract information
across different time steps and over varying time scales.
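To make the "competes with the algorithm that knows the corruption locations" benchmark concrete, here is a minimal Python sketch of a Kalman filter with a simple innovation gate. This is a toy robustification, not the paper's algorithm: a measurement whose normalized innovation exceeds a threshold is treated as corrupted and skipped, so that step falls back on the time update alone. All names and the `gate` threshold are our own.

```python
import numpy as np

def gated_kalman_filter(ys, A, C, Q, R, x0, P0, gate=3.0):
    """Kalman filter for x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t.

    Toy robustification (not the paper's algorithm): a measurement whose
    normalized innovation exceeds `gate` standard deviations is treated
    as corrupted and skipped, so that step uses the time update alone.
    """
    x, P = x0.astype(float).copy(), P0.astype(float).copy()
    estimates = []
    for y in ys:
        # Time update (predict).
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation and its covariance.
        r = y - C @ x
        S = C @ P @ C.T + R
        if float(r @ np.linalg.solve(S, r)) <= gate ** 2:
            # Measurement update, applied only to plausible measurements.
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ r
            P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

A gate like this illustrates the difficulty the paper addresses: a fixed threshold handles isolated gross outliers, but an adversary placing a constant fraction of moderate corruptions can evade it, which is why the paper needs genuinely new techniques.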
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Outlier-robust Kalman Filtering through Generalised Bayes [45.51425214486509]
We derive a novel, provably robust, and closed-form Bayesian update rule for online filtering in state-space models.
Our method matches or outperforms other robust filtering methods at a much lower computational cost.
arXiv Detail & Related papers (2024-05-09T09:40:56Z) - Outlier-Insensitive Kalman Filtering: Theory and Applications [26.889182816155838]
We propose a parameter-free algorithm that mitigates the harmful effect of outliers while requiring only a short iterative process within the standard update step of the linear Kalman filter.
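The "short iterative process" can be pictured as re-running the standard measurement update while inflating the measurement-noise covariance from the current residual, so a gross outlier effectively receives a small Kalman gain. The sketch below is our own loose simplification of that idea, not the authors' exact procedure; all names and constants are illustrative.

```python
import numpy as np

def reweighted_update(x_pred, P_pred, y, C, R, n_iter=5):
    """One Kalman measurement update with a short iterative reweighting.

    Illustrative sketch only: the measurement-noise covariance is scaled
    by a factor `s` re-estimated from the post-fit residual, so a large
    outlier is progressively down-weighted. Inliers keep s = 1 and thus
    recover the standard update exactly.
    """
    s = 1.0  # inflation factor for the measurement noise
    x, P = x_pred.copy(), P_pred.copy()
    for _ in range(n_iter):
        S = C @ P_pred @ C.T + s * R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x = x_pred + K @ (y - C @ x_pred)
        P = (np.eye(len(x_pred)) - K @ C) @ P_pred
        # Re-estimate the inflation from the post-fit residual.
        r = y - C @ x
        s = max(1.0, float(r @ np.linalg.solve(R, r)) / len(y))
    return x, P
```

On an inlier the residual stays small, `s` stays at 1, and the fixed point is the ordinary Kalman update; on an outlier `s` grows until the measurement is nearly ignored.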
arXiv Detail & Related papers (2023-09-18T06:33:28Z) - Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z) - Revisiting Rotation Averaging: Uncertainties and Robust Losses [51.64986160468128]
We argue that the main problem of current methods is the minimized cost function, which is only weakly connected with the input data via the estimated epipolar geometries.
We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging.
arXiv Detail & Related papers (2023-03-09T11:51:20Z) - Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss [34.66940399825547]
Industry has widely adopted a simple algorithm: gradient descent with noise (a.k.a. gradient Langevin dynamics).
Questions about this algorithm's privacy loss remain open -- even in the seemingly simple setting of smooth convex losses over a bounded domain.
We characterize the differential privacy up to a constant factor and show that after a small burn-in period, running SGD longer leaks no further privacy.
arXiv Detail & Related papers (2022-05-27T02:09:55Z) - Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - A New Adaptive Noise Covariance Matrices Estimation and Filtering Method: Application to Multi-Object Tracking [6.571006663689735]
Kalman filters are widely used for object tracking, where process and measurement noise are usually considered accurately known and constant.
This paper proposes a new estimation-correction closed-loop estimation method to estimate the Kalman filter process and measurement noise covariance matrices online.
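Online adaptation of the measurement-noise covariance is commonly driven by the filter's innovations: their expected outer product is $C P C^\top + R$, so subtracting the state-uncertainty term leaves a one-sample estimate of $R$. The snippet below is a hedged sketch of this general estimation-correction idea, not the paper's exact closed loop; `alpha` is an illustrative forgetting factor.

```python
import numpy as np

def update_measurement_noise(R_est, innovation, C, P_pred, alpha=0.05):
    """One step of innovation-based adaptation of the measurement-noise
    covariance R. Sketch of the general estimation-correction idea only,
    not the paper's exact method; `alpha` is a forgetting factor.
    """
    r = np.atleast_1d(innovation).reshape(-1, 1)
    # E[r r^T] = C P_pred C^T + R, so subtracting the state term
    # leaves a one-sample estimate of R.
    sample = r @ r.T - C @ P_pred @ C.T
    return (1.0 - alpha) * R_est + alpha * sample
```

Run each filter step's innovation through this update and the estimate tracks a slowly varying $R$; a symmetrization or eigenvalue floor would be needed in practice to keep the estimate positive definite.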
arXiv Detail & Related papers (2021-12-20T03:11:48Z) - Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information-theoretic minimax lower bounds for this problem in terms of the mixing time $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay, a popular reinforcement learning technique, that achieves a significantly better error rate.
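The core mechanism of experience replay for dependent data is easy to sketch: instead of updating on each point as it arrives, updates draw uniformly from a buffer of past points, which breaks up the temporal correlation of consecutive samples from the chain. The following is a minimal illustration of that idea (buffer size, batch size, and step size are ours), not the paper's algorithm.

```python
import numpy as np

def sgd_with_replay(stream, dim, buffer_size=500, batch=8, lr=0.01, seed=0):
    """SGD for least squares on a dependent (e.g. Markovian) data stream,
    using experience replay: each update samples uniformly from a buffer
    of past points, decorrelating the gradient steps. Minimal sketch;
    all hyperparameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    buffer = []
    for x, y in stream:
        buffer.append((x, y))
        if len(buffer) > buffer_size:
            buffer.pop(0)
        # Replay: a mini-batch sampled uniformly from the buffer.
        for i in rng.integers(0, len(buffer), size=batch):
            xi, yi = buffer[i]
            w -= lr * (xi @ w - yi) * xi  # gradient of (xi·w - yi)^2 / 2
    return w
```

With correlated covariates from a Markov chain, the replayed mini-batches behave much closer to i.i.d. samples from the buffer's empirical distribution than the raw stream does.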
arXiv Detail & Related papers (2020-06-16T04:26:50Z) - Online Learning of the Kalman Filter with Logarithmic Regret [2.0305676256390934]
We show that it is possible to achieve a regret of the order of $\mathrm{polylog}(N)$ with high probability, where $N$ is the number of observations collected.
This is achieved using an online least-squares algorithm, which exploits the approximately linear relation between future observations and past observations.
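The "approximately linear relation between future observations and past observations" can be exploited directly: regress each observation on a short window of its predecessors and use the fitted weights to predict online. The sketch below illustrates that improper-learning idea in the simplest form; it is not the paper's algorithm, and `window` and the ridge term `reg` are our own choices.

```python
import numpy as np

def predict_from_past(ys, window=3, reg=1.0):
    """Online least squares predicting each observation from the
    previous `window` observations. Minimal sketch of the improper
    learning idea; `reg` is a small ridge term that keeps the Gram
    matrix invertible before enough data has arrived.
    """
    gram = reg * np.eye(window)
    moment = np.zeros(window)
    preds = []
    for t, y in enumerate(ys):
        if t < window:
            preds.append(0.0)
        else:
            phi = np.asarray(ys[t - window:t][::-1], dtype=float)
            theta = np.linalg.solve(gram, moment)  # weights from data < t
            preds.append(float(phi @ theta))
            # Update the sufficient statistics after predicting (online).
            gram += np.outer(phi, phi)
            moment += phi * y
    return preds
```

Because any observable linear dynamical system makes the next observation an (approximately) linear function of recent observations, this predictor can track the Kalman filter's predictions without ever identifying the system matrices.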
arXiv Detail & Related papers (2020-02-12T18:31:31Z) - Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles [65.9694455739978]
We study contextual linear bandit problems under feature uncertainty, where the features are noisy and have missing entries.
Our analysis reveals that the optimal hypothesis can significantly deviate from the underlying realizability function, depending on the noise characteristics.
This implies that classical approaches cannot guarantee a non-trivial regret bound.
arXiv Detail & Related papers (2017-03-03T21:39:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.