Online Learning of the Kalman Filter with Logarithmic Regret
- URL: http://arxiv.org/abs/2002.05141v1
- Date: Wed, 12 Feb 2020 18:31:31 GMT
- Title: Online Learning of the Kalman Filter with Logarithmic Regret
- Authors: Anastasios Tsiamis and George Pappas
- Abstract summary: We show that it is possible to achieve a regret of the order of $\mathrm{poly}\log(N)$ with high probability, where $N$ is the number of observations collected.
This is achieved using an online least-squares algorithm, which exploits the approximately linear relation between future observations and past observations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of predicting observations generated
online by an unknown, partially observed linear system, which is driven by
stochastic noise. For such systems the optimal predictor in the mean square
sense is the celebrated Kalman filter, which can be explicitly computed when
the system model is known. When the system model is unknown, we have to learn
how to predict observations online based on finite data, suffering possibly a
non-zero regret with respect to the Kalman filter's prediction. We show that it
is possible to achieve a regret of the order of $\mathrm{poly}\log(N)$ with
high probability, where $N$ is the number of observations collected. Our work
is the first to provide logarithmic regret guarantees for the widely used
Kalman filter. This is achieved using an online least-squares algorithm, which
exploits the approximately linear relation between future observations and past
observations. The regret analysis is based on the stability properties of the
Kalman filter, recent statistical tools for finite sample analysis of system
identification, and classical results for the analysis of least-squares
algorithms for time series. Our regret analysis can also be applied for state
prediction of the hidden state, in the case of unknown noise statistics but
known state-space basis. A fundamental technical contribution is that our
bounds hold even for the class of non-explosive systems, which includes
marginally stable systems; handling this class was an open problem for online
prediction under stochastic noise.
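To illustrate the idea of the abstract, here is a hedged sketch (not the authors' code): an online regularized least-squares predictor that regresses the next observation on a window of past observations, compared against the steady-state Kalman filter on a toy scalar system. All system parameters, the lag length `p`, and the ridge regularizer `lam` are illustrative assumptions.

```python
# Toy comparison: steady-state Kalman predictor (model known) vs. an online
# least-squares predictor that only sees past observations (model unknown).
# This is a sketch under assumed parameters, not the paper's algorithm verbatim.
import numpy as np

rng = np.random.default_rng(0)

# Scalar system: x_{t+1} = a x_t + w_t,  y_t = x_t + v_t  (parameters assumed)
a, Q, R = 0.9, 1.0, 1.0
N, p, lam = 5000, 10, 1.0        # horizon, regression lags, ridge regularizer

# Simulate the system
x = np.zeros(N + 1)
y = np.zeros(N)
for t in range(N):
    y[t] = x[t] + rng.normal(scale=np.sqrt(R))
    x[t + 1] = a * x[t] + rng.normal(scale=np.sqrt(Q))

# Steady-state Kalman predictor: iterate the Riccati recursion to convergence
P = Q
for _ in range(200):
    P = a * a * P * R / (P + R) + Q
K = a * P / (P + R)               # steady-state predictor gain
xhat = 0.0
kf_pred = np.zeros(N)
for t in range(N):
    kf_pred[t] = xhat                     # predicted y_t (here C = 1)
    xhat = a * xhat + K * (y[t] - xhat)   # predictor-form Kalman update

# Online regularized least squares: predict y_t from (y_{t-1}, ..., y_{t-p}),
# exploiting the approximately linear relation between future and past outputs
G = lam * np.eye(p)               # regularized Gram matrix
b = np.zeros(p)
ls_pred = np.zeros(N)
for t in range(p, N):
    z = y[t - p:t][::-1]          # most recent observation first
    theta = np.linalg.solve(G, b) # current least-squares coefficients
    ls_pred[t] = z @ theta
    G += np.outer(z, z)           # update statistics once y_t is revealed
    b += z * y[t]

burn = 200                        # skip the initial transient
mse_kf = np.mean((y[burn:] - kf_pred[burn:]) ** 2)
mse_ls = np.mean((y[burn:] - ls_pred[burn:]) ** 2)
print(f"KF mse: {mse_kf:.3f}  online-LS mse: {mse_ls:.3f}")
```

With enough lags, the learned predictor's mean squared error approaches the Kalman filter's; the paper's contribution is bounding the cumulative gap (the regret) by a polylogarithmic function of $N$.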
Related papers
- Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability [59.758009422067]
We propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models.
Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan.
Experiments show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
arXiv Detail & Related papers (2024-09-25T11:22:29Z)
- Outlier-robust Kalman Filtering through Generalised Bayes [45.51425214486509]
We derive a novel, provably robust, and closed-form Bayesian update rule for online filtering in state-space models.
Our method matches or outperforms other robust filtering methods at a much lower computational cost.
arXiv Detail & Related papers (2024-05-09T09:40:56Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Kalman Filtering with Adversarial Corruptions [33.99155519390116]
We give the first strong provable guarantees for linear quadratic estimation when even a constant fraction of measurements have been adversarially corrupted.
Our work is in a challenging Bayesian setting where the number of measurements scales with the complexity of what we need to estimate.
We develop a suite of new techniques to robustly extract information across different time steps and over varying time scales.
arXiv Detail & Related papers (2021-11-11T18:59:21Z)
- Uncertainty in Data-Driven Kalman Filtering for Partially Known State-Space Models [84.18625250574853]
We investigate the ability of KalmanNet, a proposed hybrid model-based deep state tracking algorithm, to estimate an uncertainty measure.
We show that the error covariance matrix can be computed based on its internal features, as an uncertainty measure.
We demonstrate that when the system dynamics are known, KalmanNet, which learns its mapping from data without access to the statistics, provides uncertainty similar to that provided by the Kalman filter.
arXiv Detail & Related papers (2021-10-10T08:52:18Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- Neural Kalman Filtering [62.997667081978825]
We show that a gradient-descent approximation to the Kalman filter requires only local computations with variance weighted prediction errors.
We also show that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
arXiv Detail & Related papers (2021-02-19T16:43:15Z)
- SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory [21.09861411069719]
We present an efficient and practical (polynomial time) algorithm for online prediction in unknown and partially observed linear dynamical systems.
Our algorithm competes with Kalman filter in hindsight with only logarithmic regret.
Our theoretical and experimental results shed light on the conditions required for efficient probably approximately correct (PAC) learning of the Kalman filter from partially observed data.
arXiv Detail & Related papers (2020-10-12T17:50:21Z)
- Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
- No-Regret Prediction in Marginally Stable Systems [37.178095559618654]
We consider the problem of online prediction in a marginally stable linear dynamical system.
By applying our techniques to learning an autoregressive filter, we also achieve logarithmic regret in the partially observed setting.
arXiv Detail & Related papers (2020-02-06T01:53:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.