Real-time Linear Operator Construction and State Estimation with the
Kalman Filter
- URL: http://arxiv.org/abs/2001.11256v3
- Date: Fri, 29 May 2020 03:04:27 GMT
- Title: Real-time Linear Operator Construction and State Estimation with the
Kalman Filter
- Authors: Tsuyoshi Ishizone and Kazuyuki Nakamura
- Abstract summary: The proposed method uses three ideas: estimation in an observation space, a time-invariant interval, and an online learning framework.
By introducing localization and spatial uniformity to the proposed method, we demonstrate that noise can be reduced in high-dimensional spatio-temporal data.
The proposed method has potential for use in areas such as weather forecasting and vector field analysis.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Kalman filter is the most powerful tool for estimation of the states of a
linear Gaussian system. In addition, using this method, an expectation
maximization algorithm can be used to estimate the parameters of the model.
However, this algorithm cannot function in real time. Thus, we propose a new
method that can be used to estimate the transition matrices and the states of
the system in real time. The proposed method uses three ideas: estimation in an
observation space, a time-invariant interval, and an online learning framework.
Applied to a damped oscillation model, we have obtained extraordinary performance
in estimating the matrices. In addition, by introducing localization and spatial
uniformity to the proposed method, we have demonstrated that noise can be
reduced in high-dimensional spatio-temporal data. Moreover, the proposed method
has potential for use in areas such as weather forecasting and vector field
analysis.
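The following is only a minimal, illustrative sketch of the general idea, assuming a linear Gaussian state-space model x_{t+1} = A x_t + w_t, y_t = H x_t + v_t: a standard Kalman filter whose transition matrix A is also adjusted online from each innovation. It is not the authors' algorithm (which estimates the operator in the observation space over a time-invariant interval); the function name, the gradient-style update rule, and the learning rate are assumptions made for illustration.

```python
# Minimal sketch (not the paper's method): Kalman filtering with an
# online, gradient-style update of the transition matrix A.
import numpy as np

def online_kf(ys, A0, H, Q, R, m0, P0, lr=1e-2):
    """ys: (T, d_obs) observations. Returns filtered means and learned A."""
    A, m, P = A0.copy(), m0.copy(), P0.copy()
    n = len(m0)
    means = []
    for y in ys:
        # Predict step of the standard Kalman filter.
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Update step.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        innov = y - H @ m_pred
        m_new = m_pred + K @ innov
        P = (np.eye(n) - K @ H) @ P_pred
        # Online adjustment of A: one gradient step on the squared
        # innovation ||y - H A m||^2 with respect to A (illustrative rule).
        A -= lr * (-2.0 * np.outer(H.T @ innov, m))
        m = m_new
        means.append(m.copy())
    return np.array(means), A
```

For a damped-oscillation example like the one mentioned in the abstract, A0 could be initialized to the identity and the learned A compared against the true damped rotation matrix.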
Related papers
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood
Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
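As a rough illustration of the inverse-free ingredient described above (not the paper's probabilistic unrolling procedure), the sketch below estimates trace(K^{-1} dK), a term that appears in Gaussian marginal-likelihood gradients, using Rademacher probe vectors and a conjugate-gradient solver in place of an explicit matrix inverse. The function names and shapes are assumptions.

```python
# Minimal sketch: Monte Carlo probes + an iterative solver replace the
# explicit inverse in trace(K^{-1} dK). Illustrative only.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def hutchinson_trace_inv(K_matvec, dK_matvec, dim, n_probes=8, seed=0):
    """Estimate trace(K^{-1} dK) without forming K^{-1}."""
    rng = np.random.default_rng(seed)
    K_op = LinearOperator((dim, dim), matvec=K_matvec)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)   # Rademacher probe vector
        w, _ = cg(K_op, dK_matvec(z))           # w = K^{-1} dK z via CG
        total += z @ w                          # E[z^T K^{-1} dK z] = trace
    return total / n_probes
```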
arXiv Detail & Related papers (2023-06-05T21:08:34Z) - Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
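A minimal sketch of the low-rank plus diagonal structure mentioned above: when a matrix has the form diag(d) + W W^T with W of shape (n, r) and r much smaller than n, solves against it can use the Woodbury identity in O(n r^2) instead of a dense inverse. This is illustrative only and not the paper's EKF update; the function name and shapes are assumptions.

```python
# Minimal sketch: solving (diag(d) + W W^T) x = v with the Woodbury
# identity, so no n x n matrix is ever inverted. Illustrative only.
import numpy as np

def low_rank_diag_solve(d, W, v):
    """d: (n,) positive diagonal, W: (n, r) low-rank factor, v: (n,)."""
    Dinv_v = v / d
    Dinv_W = W / d[:, None]
    capacitance = np.eye(W.shape[1]) + W.T @ Dinv_W   # (r, r)
    return Dinv_v - Dinv_W @ np.linalg.solve(capacitance, W.T @ Dinv_v)
```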
arXiv Detail & Related papers (2023-05-31T03:48:49Z) - Dimensionality Collapse: Optimal Measurement Selection for Low-Error
Infinite-Horizon Forecasting [3.5788754401889022]
We solve the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cramér-Rao lower bound (CRLB) for forecasting as the cost.
By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived.
This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting.
arXiv Detail & Related papers (2023-03-27T17:25:04Z) - Online estimation methods for irregular autoregressive models [0.0]
Currently available methods for addressing this problem, the so-called online learning methods, use current parameter estimates and new data to update the estimators.
In this work we consider three online learning algorithms for parameter estimation in the context of time series models.
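As one concrete example of such an online update (recursive least squares with a forgetting factor, which is not necessarily one of the three algorithms studied in the paper), the sketch below fits AR(p) coefficients from a stream of regularly sampled values; the names and default parameters are assumptions.

```python
# Minimal sketch: online AR(p) coefficient estimation via recursive
# least squares with forgetting factor lam. Illustrative only.
import numpy as np

def rls_ar(x, p, lam=0.99, delta=1e3):
    phi = np.zeros(p)            # current AR coefficient estimates
    P = delta * np.eye(p)        # inverse (weighted) correlation matrix
    for t in range(p, len(x)):
        u = x[t - p:t][::-1]     # p most recent samples, newest first
        k = P @ u / (lam + u @ P @ u)          # gain vector
        phi = phi + k * (x[t] - phi @ u)       # update with prediction error
        P = (P - np.outer(k, u @ P)) / lam     # update inverse correlation
    return phi
```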
arXiv Detail & Related papers (2023-01-31T19:52:04Z) - Gaussian process regression and conditional Karhunen-Loève models
for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z) - Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
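A minimal sketch of the likelihood-to-evidence ratio idea summarized above: a classifier separates dependent (theta, x) pairs from pairs in which x has been shuffled, and its logit approximates log p(x|theta)/p(x). A linear classifier is used only to keep the sketch short; the paper's mutual-information objective and network architecture are not reproduced here.

```python
# Minimal sketch: amortized likelihood-to-evidence ratio estimation via
# binary classification of joint vs. shuffled (theta, x) pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_log_ratio(thetas, xs, seed=0):
    rng = np.random.default_rng(seed)
    joint = np.hstack([thetas, xs])                               # label 1
    marginal = np.hstack([thetas, xs[rng.permutation(len(xs))]])  # label 0
    X = np.vstack([joint, marginal])
    y = np.r_[np.ones(len(joint)), np.zeros(len(marginal))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # The classifier's logit estimates log p(x | theta) / p(x).
    return lambda th, x: clf.decision_function(np.hstack([th, x])[None, :])[0]
```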
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Data-Driven Shadowgraph Simulation of a 3D Object [50.591267188664666]
We replace the numerical code with a computationally cheaper projection-based surrogate model.
The model is able to approximate the electric fields at a given time without computing all preceding electric fields as required by numerical methods.
The model has shown good-quality reconstruction under perturbation of the data within a narrow range of simulation parameters and can be used for input data of large size.
arXiv Detail & Related papers (2021-06-01T08:46:04Z) - Sparse Algorithms for Markovian Gaussian Processes [18.999495374836584]
Sparse Markovian Gaussian processes combine the use of inducing variables with efficient Kalman-filter-like recursions.
We derive a general site-based approach to approximate the non-Gaussian likelihood with local Gaussian terms, called sites.
Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literatures, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers.
The derived methods are suited to spatio-temporal data, where the model has separate inducing points in both time and space.
arXiv Detail & Related papers (2021-03-19T09:50:53Z) - Remaining Useful Life Estimation Under Uncertainty with Causal GraphNets [0.0]
A novel approach for the construction and training of time series models is presented.
The proposed method is appropriate for constructing predictive models for non-stationary time series.
arXiv Detail & Related papers (2020-11-23T21:28:03Z) - Random Matrix Based Extended Target Tracking with Orientation: A New
Model and Inference [0.0]
We propose a novel extended target tracking algorithm which is capable of representing the extent of dynamic objects as an ellipsoid with a time-varying orientation angle.
A diagonal positive semi-definite matrix is defined to model objects' extent within the random matrix framework.
It is not possible to find a closed-form analytical expression for the true posterior because of the absence of conjugacy.
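As a small illustration of the extent model described above (2-D case only, not the paper's inference scheme), the sketch below builds an ellipsoidal extent matrix from a diagonal positive semi-definite matrix and a time-varying orientation angle; the function name and arguments are assumptions.

```python
# Minimal sketch: ellipsoidal extent X = R(theta) diag(a^2) R(theta)^T
# with a time-varying orientation angle theta. Illustrative only.
import numpy as np

def extent_matrix(theta, semi_axes):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                    # rotation by theta
    return R @ np.diag(np.asarray(semi_axes) ** 2) @ R.T
```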
arXiv Detail & Related papers (2020-10-17T16:33:06Z)