System Identification via Nuclear Norm Regularization
- URL: http://arxiv.org/abs/2203.16673v1
- Date: Wed, 30 Mar 2022 20:56:27 GMT
- Title: System Identification via Nuclear Norm Regularization
- Authors: Yue Sun and Samet Oymak and Maryam Fazel
- Abstract summary: This paper studies the problem of identifying low-order linear systems via Hankel nuclear norm regularization.
We provide novel statistical analysis for this regularization and carefully contrast it with the unregularized ordinary least-squares (OLS) estimator.
- Score: 44.9687872697492
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the problem of identifying low-order linear systems via
Hankel nuclear norm regularization. Hankel regularization encourages the
low-rankness of the Hankel matrix, which maps to the low-orderness of the
system. We provide novel statistical analysis for this regularization and
carefully contrast it with the unregularized ordinary least-squares (OLS)
estimator. Our analysis leads to new bounds on estimating the impulse response
and the Hankel matrix associated with the linear system. We first design an
input excitation and show that Hankel regularization enables one to recover the
system using a number of observations that is optimal in the true system order,
achieving strong statistical estimation rates. Surprisingly, we demonstrate that
the input design indeed matters, by showing that intuitive choices such as
i.i.d. Gaussian inputs lead to provably sub-optimal sample complexity. To
better understand the benefits of regularization, we also revisit the OLS
estimator. Besides refining existing bounds, we experimentally identify when the
regularized approach improves over OLS: (1) for low-order systems with slow
impulse-response decay, the OLS method performs poorly in terms of sample
complexity; (2) the Hankel matrix returned by regularization has a clearer
singular value gap, which eases identification of the system order; (3) Hankel
regularization is less sensitive to the hyperparameter choice. Finally, we
establish model selection guarantees through a joint train-validation procedure
where we tune the regularization parameter for near-optimal estimation.
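The central object in the abstract, a low-rank Hankel matrix promoted via the nuclear norm, can be illustrated with a short numpy sketch. This is not the authors' code: the first-order system, the threshold value, and the helper names are illustrative assumptions. It builds the Hankel matrix of an impulse response and applies singular value thresholding, the proximal operator of the nuclear norm that underlies such regularized estimators.

```python
import numpy as np

def hankel(h, rows):
    """Hankel matrix whose (i, j) entry is h[i + j]."""
    cols = len(h) - rows + 1
    return np.array([[h[i + j] for j in range(cols)] for i in range(rows)])

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Impulse response of an (assumed) order-1 system h[k] = a**k;
# a low-order system maps to a low-rank Hankel matrix.
a = 0.5
h = a ** np.arange(9)
H = hankel(h, rows=5)            # 5 x 5 Hankel matrix
print(np.linalg.matrix_rank(H))  # -> 1, matching the system order
```

The thresholding step `svt(H, tau)` shrinks every singular value by `tau` and never increases the nuclear norm, which is why iterating it inside a proximal gradient loop drives the estimate toward low rank.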
Related papers
- Precision measurement for open systems by non-hermitian linear response [9.087477434347218]
We first derive some general results regarding the lower bound on the estimation precision for a dissipative parameter.
This lower bound is related to the correlation of the encoding dissipative operator and the evolution time.
arXiv Detail & Related papers (2024-06-17T07:51:02Z)
- A least-square method for non-asymptotic identification in linear switching control [17.938732931331064]
It is known that the underlying partially-observed linear dynamical system lies within a finite collection of known candidate models.
We characterize the finite-time sample complexity of this problem by leveraging recent advances in the non-asymptotic analysis of linear least-square methods.
We propose a data-driven switching strategy that identifies the unknown parameters of the underlying system.
arXiv Detail & Related papers (2024-04-11T20:55:38Z)
- An L-BFGS-B approach for linear and nonlinear system identification under $\ell_1$- and group-Lasso regularization [0.0]
We propose a very efficient numerical method for identifying linear and nonlinear discrete-time state-space models.
A Python implementation of the proposed identification method is available in the package jax-sysid.
arXiv Detail & Related papers (2024-03-06T16:17:34Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Efficient learning of hidden state LTI state space models of unknown order [0.7614628596146599]
We address two related estimation problems arising in the setup of hidden state linear time invariant (LTI) state space systems.
Namely, the estimation of any finite number of the system's Markov parameters and the estimation of a minimal realization for the system.
For both problems, we provide statistical guarantees in the form of various estimation error upper bounds, rank recovery conditions, and sample complexity estimates.
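The second problem in this entry, extracting a minimal realization from estimated Markov parameters, is classically solved by the Ho-Kalman procedure: factor the Hankel matrix of Markov parameters through a rank-n truncated SVD and read the realization off the factors. The sketch below is an illustrative SISO implementation under the assumption that the order n is known, not the paper's algorithm.

```python
import numpy as np

def ho_kalman(markov, n):
    """Recover (A, B, C) of order n from SISO Markov parameters h[k] = C A^k B."""
    T = len(markov) // 2
    # Hankel matrix with one extra block row, so the shift trick below works.
    H = np.array([[markov[i + j] for j in range(T)] for i in range(T + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    O = U[:, :n] * np.sqrt(s[:n])          # observability factor
    Q = np.sqrt(s[:n])[:, None] * Vt[:n]   # controllability factor
    A = np.linalg.pinv(O[:-1]) @ O[1:]     # shift-invariance of the observability matrix
    B = Q[:, :1]
    C = O[:1, :]
    return A, B, C

# True order-1 system A = 0.5, B = C = 1, so h[k] = 0.5**k.
h = 0.5 ** np.arange(10)
A, B, C = ho_kalman(h, n=1)
print(float(C @ A @ B))  # recovers h[1] = 0.5 up to numerical error
```

Because the Hankel matrix factors exactly as (observability) x (controllability), any sign ambiguity in the SVD cancels in the products C A^k B, so the recovered impulse response matches the original one.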
arXiv Detail & Related papers (2022-02-03T14:59:13Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new nonconvex low-rank regularizer, the "nuclear norm minus Frobenius norm" regularizer, which is scalable, adaptive and sound.
It bypasses the computation of singular values, allowing fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered weighted $L_1$ (OWL) regularized regression is a new regression analysis method for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
This list is automatically generated from the titles and abstracts of the papers in this site.