Finite-time Identification of Stable Linear Systems: Optimality of the
Least-Squares Estimator
- URL: http://arxiv.org/abs/2003.07937v3
- Date: Thu, 26 Mar 2020 17:54:55 GMT
- Title: Finite-time Identification of Stable Linear Systems: Optimality of the
Least-Squares Estimator
- Authors: Yassir Jedra and Alexandre Proutiere
- Abstract summary: We present a new finite-time analysis of the estimation error of the Ordinary Least Squares (OLS) estimator for stable linear time-invariant systems.
We characterize the number of observed samples sufficient for the OLS estimator to be $(\varepsilon,\delta)$-PAC, i.e., to yield an estimation error less than $\varepsilon$ with probability at least $1-\delta$.
- Score: 79.3239137440876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new finite-time analysis of the estimation error of the Ordinary
Least Squares (OLS) estimator for stable linear time-invariant systems. We
characterize the number of observed samples (the length of the observed
trajectory) sufficient for the OLS estimator to be $(\varepsilon,\delta)$-PAC,
i.e., to yield an estimation error less than $\varepsilon$ with probability at
least $1-\delta$. We show that this number matches existing sample complexity
lower bounds [1,2] up to universal multiplicative factors (independent of
$(\varepsilon,\delta)$ and of the system). This paper hence establishes the
optimality of the OLS estimator for stable systems, a result conjectured in
[1]. Our analysis of the performance of the OLS estimator is simpler, sharper,
and easier to interpret than existing analyses. It relies on new concentration
results for the covariates matrix.
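The OLS estimator analyzed above admits a compact numerical sketch. The following is a minimal illustration, not the paper's experiments: the system matrix `A_true`, trajectory length `T`, and noise scale are illustrative choices, and the error threshold is not one of the paper's bounds.

```python
import numpy as np

# Sketch: OLS identification of a stable LTI system x_{t+1} = A x_t + w_t.
# A_true has spectral radius < 1, so the system is stable.
rng = np.random.default_rng(0)

d = 3
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.2],
                   [0.1, 0.0, 0.3]])

T = 5000  # length of the observed trajectory (number of samples)
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)

# OLS estimate: A_hat solves min_A sum_t ||x_{t+1} - A x_t||^2,
# computed here as a least-squares problem on the covariates matrix X[:-1].
A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = A_hat.T

err = np.linalg.norm(A_hat - A_true, 2)  # operator-norm estimation error
print(err)
```

As the trajectory length grows, the estimation error shrinks; the paper's contribution is the sharp finite-time characterization of how many samples suffice for a given $(\varepsilon,\delta)$.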
Related papers
- Efficient Certificates of Anti-Concentration Beyond Gaussians [15.709968227246453]
This work presents a new (and arguably the most natural) formulation for anti-concentration.
We give quasi-polynomial time verifiable sum-of-squares certificates of anti-concentration that hold for a wide class of non-Gaussian distributions.
Our approach constructs a canonical integer program for anti-concentration and analyzes a sum-of-squares relaxation of it, independent of the intended application.
arXiv Detail & Related papers (2024-05-23T22:13:44Z)
- Low-Rank Approximation of Structural Redundancy for Self-Supervised Learning [2.3072402651280517]
We study the data-generating mechanism for reconstructive SSL to shed light on its effectiveness.
With an infinite amount of labeled samples, we provide a sufficient and necessary condition for perfect linear approximation.
Motivated by the condition, we propose to approximate the redundant component by a low-rank factorization.
arXiv Detail & Related papers (2024-02-10T04:45:27Z)
- A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$ and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z)
- Outlier-Robust Sparse Mean Estimation for Heavy-Tailed Distributions [42.6763105645717]
Given a small number of corrupted samples, the goal is to efficiently compute a hypothesis that accurately approximates $\mu$ with high probability.
Our algorithm achieves the optimal error using a number of samples scaling logarithmically with the ambient dimension.
Our analysis may be of independent interest, involving the delicate design of a (non-spectral) decomposition for positive semi-definite matrices satisfying certain sparsity properties.
arXiv Detail & Related papers (2022-11-29T16:13:50Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Provably Auditing Ordinary Least Squares in Low Dimensions [17.655785504931913]
Most metrics measure stability of conclusions derived from Ordinary Least Squares linear regression.
Recent work proposes a simple, global, finite-sample stability metric: the minimum number of samples that need to be removed so that rerunning the analysis overturns the conclusion.
We show that in the low-dimensional regime where the number of covariates is a constant but the number of samples is large, there are efficient algorithms for provably estimating this metric.
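The stability metric described above is easy to state concretely. This is an exhaustive-search sketch on toy data purely for intuition (the paper's contribution is efficient estimation of the metric; the function name, the sign-flip criterion, and the data here are illustrative choices):

```python
import itertools
import numpy as np

# Brute-force version of the stability metric: the minimum number of samples
# whose removal flips the sign of a chosen OLS coefficient, i.e., overturns
# the regression's conclusion. Feasible only for tiny n.
def min_removals_to_flip(X, y, coef_idx=0, max_k=None):
    n = len(y)
    base_sign = np.sign(np.linalg.lstsq(X, y, rcond=None)[0][coef_idx])
    for k in range(1, (max_k or n)):
        for drop in itertools.combinations(range(n), k):
            keep = [i for i in range(n) if i not in drop]
            beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
            if np.sign(beta[coef_idx]) != base_sign:
                return k
    return None  # conclusion not overturned within max_k removals

# Toy data: a mild positive trend plus one strong outlier that makes the
# fitted slope negative; removing that single point flips the sign.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0.1, 0.2, 0.3, -5.0])
print(min_removals_to_flip(X, y))
```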
arXiv Detail & Related papers (2022-05-28T00:45:10Z)
- Efficient learning of hidden state LTI state space models of unknown order [0.7614628596146599]
We address two related estimation problems arising in the setup of hidden state linear time invariant (LTI) state space systems.
Namely, the estimation of any finite number of the system's Markov parameters and the estimation of a minimal realization for the system.
For both problems, we provide statistical guarantees in the form of various estimation error upper bounds, rank recovery conditions, and sample complexity estimates.
arXiv Detail & Related papers (2022-02-03T14:59:13Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
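Clipped gradient descent for streaming estimation can be sketched in a few lines. This is a toy illustration of the general idea, not the paper's algorithm or analysis: the clipping threshold, $1/t$ step size, and heavy-tailed test distribution are all illustrative choices.

```python
import numpy as np

# Sketch: clipped stochastic gradient descent for streaming mean estimation
# under heavy-tailed noise. Clipping the per-sample gradient controls the
# influence of extreme observations.
def clipped_sgd_mean(stream, clip=5.0):
    theta = 0.0
    for t, x in enumerate(stream, start=1):
        g = theta - x                 # gradient of the loss (theta - x)^2 / 2
        g = np.clip(g, -clip, clip)   # clip to tame heavy-tailed samples
        theta -= g / t                # diminishing step size 1/t
    return theta

# Heavy-tailed stream: Student-t samples (df=2.5) shifted to have mean 3.
rng = np.random.default_rng(1)
samples = rng.standard_t(df=2.5, size=20000) + 3.0
est = clipped_sgd_mean(samples)
print(est)
```

Without clipping, occasional extreme samples would dominate the updates; with it, the iterate tracks the mean stably even though the distribution has heavy tails.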
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Stochastic Approximation for Online Tensorial Independent Component Analysis [98.34292831923335]
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing.
In this paper, we present a by-product online tensorial algorithm that estimates each independent component.
arXiv Detail & Related papers (2020-12-28T18:52:37Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.