Learning to Control Linear Systems can be Hard
- URL: http://arxiv.org/abs/2205.14035v1
- Date: Fri, 27 May 2022 15:07:30 GMT
- Title: Learning to Control Linear Systems can be Hard
- Authors: Anastasios Tsiamis, Ingvar Ziemann, Manfred Morari, Nikolai Matni,
George J. Pappas
- Abstract summary: We study the statistical difficulty of learning to control linear systems.
We prove that learning complexity can be at most exponential in the controllability index, that is, the degree of underactuation.
- Score: 19.034920102339573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the statistical difficulty of learning to control
linear systems. We focus on two standard benchmarks, the sample complexity of
stabilization, and the regret of the online learning of the Linear Quadratic
Regulator (LQR). Prior results state that the statistical difficulty for both
benchmarks scales polynomially with the system state dimension up to
system-theoretic quantities. However, this does not reveal the whole picture.
By utilizing minimax lower bounds for both benchmarks, we prove that there
exist non-trivial classes of systems for which learning complexity scales
dramatically, i.e. exponentially, with the system dimension. This situation
arises in the case of underactuated systems, i.e. systems with fewer inputs
than states. Such systems are structurally difficult to control, and their system-theoretic quantities can scale exponentially with the system dimension, dominating the learning complexity. Under some additional structural assumptions
(bounding systems away from uncontrollability), we provide qualitatively
matching upper bounds. We prove that the learning complexity is at most exponential in the controllability index of the system, that is, the degree of underactuation.
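As a minimal numerical sketch of why underactuation hurts (illustrative, not taken from the paper), the snippet below builds a chain of weakly coupled states driven by a single input and reports how the smallest singular value of the controllability matrix collapses as the state dimension grows; the system matrices and coupling constant are arbitrary assumptions.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

for n in range(2, 13):
    # Chain of n states driven by a single input at one end: state i+1 is reachable
    # from the input only through state i, so the controllability index equals n.
    A = 0.9 * np.eye(n) + 0.5 * np.eye(n, k=-1)
    B = np.zeros((n, 1))
    B[0, 0] = 1.0
    sigma_min = np.linalg.svd(ctrb(A, B), compute_uv=False).min()
    print(f"n = {n:2d}   sigma_min of controllability matrix = {sigma_min:.2e}")
```

The printed values shrink at least geometrically in the dimension, which is the kind of exponential blow-up of system-theoretic quantities that, per the lower bounds above, any learning algorithm must inherit.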
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
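The method itself ships as the authors' open-source library; as a generic illustration of drift/diffusion estimation from sampled trajectories (not of that library's API), the sketch below simulates a scalar controlled SDE with Euler-Maruyama, recovers the drift by least squares on finite differences, and reads off the diffusion from the residual quadratic variation. The chosen coefficients, discretization, and input signal are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 1e-3, 200_000

# Illustrative ground truth: linear drift in (x, u), state-dependent diffusion.
drift = lambda x, u: -1.0 * x + 1.0 * u
diffusion = lambda x: 0.5 + 0.25 * x**2

# Euler-Maruyama simulation under a persistently exciting sinusoidal input.
x = np.empty(steps + 1)
u = np.sin(2 * np.pi * 0.01 * np.arange(steps))
x[0] = 0.0
for t in range(steps):
    x[t + 1] = (x[t] + drift(x[t], u[t]) * dt
                + diffusion(x[t]) * np.sqrt(dt) * rng.standard_normal())

# Drift: least-squares fit of dx/dt on the features (x, u).
dx = np.diff(x)
Z = np.column_stack([x[:-1], u])
coef, *_ = np.linalg.lstsq(Z, dx / dt, rcond=None)
print("estimated drift coefficients (true are -1, +1):", np.round(coef, 2))

# Diffusion: residual quadratic variation, binned by state value.
resid = dx - (Z @ coef) * dt
bins = np.digitize(x[:-1], np.quantile(x[:-1], [0.25, 0.5, 0.75]))
for b in range(4):
    mask = bins == b
    sigma_hat = np.sqrt(np.mean(resid[mask] ** 2) / dt)
    sigma_true = diffusion(x[:-1][mask]).mean()
    print(f"bin {b}: sigma_hat = {sigma_hat:.2f}  (binned true mean {sigma_true:.2f})")
```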
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - On the Hardness of Learning to Stabilize Linear Systems [4.962316236417777]
We study the statistical hardness of learning to stabilize linear time-invariant systems.
We present a class of systems that can be easy to identify, thanks to a non-degenerate noise process.
We tie this result to the hardness of co-stabilizability for this class of systems using ideas from robust control.
arXiv Detail & Related papers (2023-11-18T19:34:56Z) - Finite Sample Identification of Bilinear Dynamical Systems [29.973598501311233]
We show how to estimate the unknown bilinear system up to a desired accuracy with high probability.
Our sample complexity and statistical error rates are optimal in terms of the trajectory length, the dimensionality of the system and the input size.
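As a hedged, minimal illustration of the identification problem in this entry (not the authors' estimator or their optimal rates), the sketch below simulates a small bilinear system $x_{t+1} = A x_t + u_t B x_t + w_t$ with a scalar input and recovers $(A, B)$ by ordinary least squares on the stacked features $(x_t, u_t x_t)$; the dimensions, scalings, and input distribution are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 3, 5_000

# Illustrative ground-truth bilinear dynamics x_{t+1} = A x_t + u_t * B x_t + w_t,
# scaled so that ||A + u B|| < 1 for all |u| <= 1 (keeps the state bounded).
A = 0.5 * np.eye(n)
A[0, 1] = 0.2
B = rng.standard_normal((n, n))
B *= 0.25 / np.linalg.norm(B, 2)

x = np.zeros((T + 1, n))
u = rng.uniform(-1.0, 1.0, size=T)        # bounded, exciting scalar input
for t in range(T):
    w = 0.1 * rng.standard_normal(n)      # process noise
    x[t + 1] = A @ x[t] + u[t] * (B @ x[t]) + w

# Least squares: regress x_{t+1} on the stacked features z_t = [x_t, u_t * x_t].
Z = np.hstack([x[:-1], u[:, None] * x[:-1]])       # shape (T, 2n)
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)  # shape (2n, n)
A_hat, B_hat = Theta[:n].T, Theta[n:].T

print("estimation error ||A_hat - A||_F =", round(float(np.linalg.norm(A_hat - A)), 3))
print("estimation error ||B_hat - B||_F =", round(float(np.linalg.norm(B_hat - B)), 3))
```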
arXiv Detail & Related papers (2022-08-29T22:34:22Z) - Online Control of Unknown Time-Varying Dynamical Systems [48.75672260851758]
We study online control of time-varying linear systems with unknown dynamics in the nonstochastic control model.
We study regret bounds with respect to common classes of policies: Disturbance Action (SLS), Disturbance Response (Youla), and linear feedback policies.
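For context on the policy classes named in this entry, the disturbance-action parameterization that is standard in the nonstochastic control literature (the symbols $K$, $H$, and $M_t^{[i]}$ below are generic, not this paper's notation) chooses the input as a fixed stabilizing feedback plus a finite linear combination of past disturbances,
$u_t = -K x_t + \sum_{i=1}^{H} M_t^{[i]} w_{t-i}$, where $w_t = x_{t+1} - A x_t - B u_t$ is the observed disturbance,
and regret is then measured against the best parameters of this form (or of the Youla/linear-feedback parameterizations) chosen in hindsight.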
arXiv Detail & Related papers (2022-02-16T06:57:14Z) - Regret Lower Bounds for Learning Linear Quadratic Gaussian Systems [6.261682379939611]
We derive regret lower bounds scaling on the order of $\sqrt{T}$ in the time horizon $T$.
Our bounds accurately capture the role of control-theoretic parameters and we are able to show that systems that are hard to control are also hard to learn to control.
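Using generic symbols rather than this paper's notation, the benchmark behind such bounds is the usual online LQR/LQG regret: the learner's accumulated quadratic cost minus $T$ times the optimal average cost of the controller that knows the system,
$R_T = \sum_{t=1}^{T} \left( x_t^\top Q x_t + u_t^\top R u_t \right) - T\, J_\star$,
with states replaced by outputs in the partially observed LQG case; a $\sqrt{T}$ lower bound therefore means the average per-step excess cost of any learner can decay no faster than $1/\sqrt{T}$.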
arXiv Detail & Related papers (2022-01-05T16:19:16Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Linear Systems can be Hard to Learn [12.834033463883085]
Statistically easy to learn linear system classes have sample complexity that is polynomial with the system dimension.
Statistically hard to learn linear system classes have worst-case sample complexity that is at least exponential with the system dimension.
arXiv Detail & Related papers (2021-04-02T15:58:30Z) - Controlling nonlinear dynamical systems into arbitrary states using
machine learning [77.34726150561087]
We propose a novel, fully data-driven control scheme which relies on machine learning (ML).
Exploiting recently developed ML-based prediction capabilities of complex systems, we demonstrate that nonlinear systems can be forced to stay in arbitrary dynamical target states coming from any initial state.
With this highly flexible control scheme, which places only modest demands on the amount of required data, we briefly discuss possible applications that range from engineering to medicine.
arXiv Detail & Related papers (2021-02-23T16:58:26Z) - Learning Partially Observed Linear Dynamical Systems from Logarithmic
Number of Samples [4.7464518249313805]
We study the problem of learning partially observed linear dynamical systems from a single sample trajectory.
Our result significantly improves the sample complexity of learning partially observed linear dynamical systems.
arXiv Detail & Related papers (2020-10-08T14:23:48Z) - Active Learning for Nonlinear System Identification with Guarantees [102.43355665393067]
We study a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs.
We propose an active learning approach that achieves this by repeating three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.
We show that our method estimates nonlinear dynamical systems at a parametric rate, similar to the statistical rate of standard linear regression.
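As a self-contained toy of the three-step loop described in this entry (not the paper's algorithm or guarantees), the sketch below runs on a scalar system that is linear in a known feature map, replaces trajectory planning with a one-step greedy choice of the most informative input, executes it, and periodically refits least squares; the feature map and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar system whose transition is linear in a known feature map:
#   x_{t+1} = theta_star . phi(x_t, u_t) + w_t,  with  phi(x, u) = (x, u, x*u).
phi = lambda x, u: np.array([x, u, x * u])
theta_star = np.array([0.5, 1.0, 0.2])

def step(x, u):
    return theta_star @ phi(x, u) + 0.05 * rng.standard_normal()

d = 3
design = 1e-3 * np.eye(d)          # regularized design matrix, sum of phi phi^T
Phi, Y = [], []
x = 0.0
candidates = np.linspace(-1.0, 1.0, 21)

for t in range(300):
    # 1. "Planning", reduced to one step: pick the input whose feature vector most
    #    increases the smallest eigenvalue of the design matrix, i.e. excites the
    #    least explored direction (a greedy stand-in for trajectory planning).
    gains = [np.linalg.eigvalsh(design + np.outer(phi(x, u), phi(x, u)))[0]
             for u in candidates]
    u = candidates[int(np.argmax(gains))]

    # 2. "Tracking": apply the planned input on the real system, log the transition.
    x_next = step(x, u)
    f = phi(x, u)
    design += np.outer(f, f)
    Phi.append(f)
    Y.append(x_next)
    x = x_next

    # 3. Re-estimation: refit least squares on all data collected so far.
    if (t + 1) % 100 == 0:
        theta_hat, *_ = np.linalg.lstsq(np.array(Phi), np.array(Y), rcond=None)
        print(f"after {t + 1} steps: theta_hat = {np.round(theta_hat, 2)} "
              f"(true {theta_star})")
```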
arXiv Detail & Related papers (2020-06-18T04:54:11Z) - Combining Machine Learning with Knowledge-Based Modeling for Scalable
Forecasting and Subgrid-Scale Closure of Large, Complex, Spatiotemporal
Systems [48.7576911714538]
We attempt to utilize machine learning as the essential tool for integrating past temporal data into predictions.
We propose combining two approaches: (i) a parallel machine learning prediction scheme; and (ii) a hybrid technique, for a composite prediction system composed of a knowledge-based component and a machine-learning-based component.
We demonstrate that not only can this method combining (i) and (ii) be scaled to give excellent performance for very large systems, but also that the length of time series data needed to train our multiple, parallel machine learning components is dramatically less than that necessary without parallelization.
arXiv Detail & Related papers (2020-02-10T23:21:50Z)