Efficient learning of hidden state LTI state space models of unknown
order
- URL: http://arxiv.org/abs/2202.01625v1
- Date: Thu, 3 Feb 2022 14:59:13 GMT
- Title: Efficient learning of hidden state LTI state space models of unknown
order
- Authors: Boualem Djehiche and Othmane Mazhar
- Abstract summary: We address two related estimation problems arising in the setup of hidden state linear time invariant (LTI) state space systems.
Namely, the estimation of any finite number of the system's Markov parameters and the estimation of a minimal realization for the system.
For both problems, we provide statistical guarantees in the form of various estimation error upper bounds, rank recovery conditions, and sample complexity estimates.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The aim of this paper is to address two related estimation problems arising
in the setup of hidden state linear time invariant (LTI) state space systems
when the dimension of the hidden state is unknown. Namely, the estimation of
any finite number of the system's Markov parameters and the estimation of a
minimal realization for the system, both from the partial observation of a
single trajectory. For both problems, we provide statistical guarantees in the
form of various estimation error upper bounds, $\mathrm{rank}$ recovery conditions, and
sample complexity estimates.
Specifically, we first show that the low-$\mathrm{rank}$ solution of the Hankel
penalized least squares estimator satisfies an estimation error bound in $S_p$-norms
for $p \in [1,2]$ that captures the effect of the system order better than the
existing operator norm upper bound for the simple least square. We then provide
a stability analysis for an estimation procedure based on a variant of the
Ho-Kalman algorithm that improves both the dependence on the dimension and the
least singular value of the Hankel matrix of the Markov parameters. Finally, we
propose an estimation algorithm for the minimal realization that uses both the
Hankel penalized least square estimator and the Ho-Kalman based estimation
procedure; with high probability, it recovers the correct order of the system and
satisfies a new fast rate in the $S_2$-norm with a polynomial reduction in the
dependence on the dimension and other parameters of the problem.
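The minimal-realization step above builds on the Ho-Kalman idea: factor a block Hankel matrix of Markov parameters to recover $(A, B, C)$. A minimal NumPy sketch of the classical procedure follows; it is not the paper's exact variant, and the function name `ho_kalman`, the Hankel dimensions `n1`/`n2`, and the singular-value thresholding rule for the order are illustrative assumptions:

```python
import numpy as np

def ho_kalman(markov_params, n1, n2, order=None, tol=1e-8):
    """Recover a realization (A, B, C) from Markov parameters
    G_k = C A^k B, k = 0, 1, ..., via SVD of the block Hankel matrix.

    markov_params: list of p x m arrays, length at least n1 + n2.
    """
    p, m = markov_params[0].shape
    # Block Hankel matrix H[i, j] = G_{i+j} and its shift Hp[i, j] = G_{i+j+1}
    H = np.block([[markov_params[i + j] for j in range(n2)] for i in range(n1)])
    Hp = np.block([[markov_params[i + j + 1] for j in range(n2)] for i in range(n1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    if order is None:
        # Estimate the order as the number of significant singular values
        order = int(np.sum(s > tol * s[0]))
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    # Balanced factorization H = O @ Q (observability x controllability)
    O = U * np.sqrt(s)
    Q = np.sqrt(s)[:, None] * Vt
    C = O[:p, :]                 # first block row of O
    B = Q[:, :m]                 # first block column of Q
    A = np.linalg.pinv(O) @ Hp @ np.linalg.pinv(Q)
    return A, B, C
```

With exact Markov parameters and a correct rank estimate, the recovered triple reproduces the original Markov parameters up to a similarity transform; the paper's contribution lies in controlling how this step amplifies estimation noise.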
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a trust-region sequential quadratic programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing an approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- On the existence of unbiased resilient estimators in discrete quantum systems [0.0]
We compare the performance of Cramér-Rao and Bhattacharyya bounds when faced with less-than-ideal prior knowledge of the parameter.
For a given system dimension, one can construct estimators in quantum systems that exhibit increased robustness to prior ignorance.
arXiv Detail & Related papers (2024-02-23T10:12:35Z)
- Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees [39.287388288477096]
We consider the task of estimating a structural model of dynamic decisions by a human agent based upon the observable history of implemented actions and visited states.
This problem has an inherent nested structure: in the inner problem, an optimal policy for a given reward function is identified while in the outer problem, a measure of fit is maximized.
We propose a single-loop estimation algorithm with finite time guarantees that is equipped to deal with high-dimensional state spaces.
arXiv Detail & Related papers (2022-10-04T00:11:38Z)
- System Identification via Nuclear Norm Regularization [44.9687872697492]
This paper studies the problem of identifying low-order linear systems via Hankel nuclear norm regularization.
We provide novel statistical analysis for this regularization and carefully contrast it with the unregularized ordinary least-squares (OLS) estimator.
arXiv Detail & Related papers (2022-03-30T20:56:27Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Sparse Identification of Nonlinear Dynamical Systems via Reweighted $\ell_1$-regularized Least Squares [62.997667081978825]
This work proposes an iterative sparse-regularized regression method to recover governing equations of nonlinear systems from noisy state measurements.
The aim of this work is to improve the accuracy and robustness of the method in the presence of state measurement noise.
arXiv Detail & Related papers (2020-05-27T08:30:15Z)
- Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)
- Finite-time Identification of Stable Linear Systems: Optimality of the Least-Squares Estimator [79.3239137440876]
We present a new finite-time analysis of the estimation error of the Ordinary Least Squares (OLS) estimator for stable linear time-invariant systems.
We characterize the number of observed samples sufficient for the OLS estimator to be $(\varepsilon,\delta)$-PAC, i.e., to yield an estimation error less than $\varepsilon$ with probability at least $1-\delta$.
arXiv Detail & Related papers (2020-03-17T20:59:17Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
- Bayesian System ID: Optimal management of parameter, model, and measurement uncertainty [0.0]
We evaluate the robustness of a probabilistic formulation of system identification (ID) to sparse, noisy, and indirect data.
We show that the log posterior has improved geometric properties compared with the objective function surfaces of traditional methods.
arXiv Detail & Related papers (2020-03-04T22:48:30Z)
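The main paper's Hankel penalized least squares estimator and the nuclear norm regularization work above share a common proximal building block: singular value soft-thresholding, which shrinks every singular value of a matrix and so promotes low-rank solutions. A minimal sketch, under the assumption that the penalty is the nuclear norm (the function name `svt` is mine, not from either paper):

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the proximal operator of
    tau * (nuclear norm). Singular values below tau are set to zero,
    so small noise-induced singular values are eliminated."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```

Applied to a noisy Hankel matrix of estimated Markov parameters with a suitably chosen threshold, this step zeroes the spurious singular values introduced by noise, which is the mechanism behind the rank recovery conditions discussed above.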
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.