On the Hardness of Learning to Stabilize Linear Systems
- URL: http://arxiv.org/abs/2311.11151v1
- Date: Sat, 18 Nov 2023 19:34:56 GMT
- Title: On the Hardness of Learning to Stabilize Linear Systems
- Authors: Xiong Zeng, Zexiang Liu, Zhe Du, Necmiye Ozay, Mario Sznaier
- Abstract summary: We study the statistical hardness of learning to stabilize linear time-invariant systems.
We present a class of systems that can be easy to identify, thanks to a non-degenerate noise process.
We tie this result to the hardness of co-stabilizability for this class of systems using ideas from robust control.
- Score: 4.962316236417777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by the work of Tsiamis et al. \cite{tsiamis2022learning}, in this
paper we study the statistical hardness of learning to stabilize linear
time-invariant systems. Hardness is measured by the number of samples required
to achieve a learning task with a given probability. The work in
\cite{tsiamis2022learning} shows that there exist system classes that are hard
to learn to stabilize, with the core reason being the hardness of
identification. Here we present a class of systems that can be easy to
identify, thanks to a non-degenerate noise process that excites all modes, but
the sample complexity of stabilization still increases exponentially with the
system dimension. We tie this result to the hardness of co-stabilizability for
this class of systems using ideas from robust control.
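The pipeline the abstract describes, identifying an LTI system from noisy trajectories and then synthesizing a stabilizing controller, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's construction: the system matrices, noise scale, and LQR weights below are invented for the example. It performs least-squares identification from one noise-excited trajectory, then computes a certainty-equivalent LQR gain by iterating the discrete Riccati equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 3, 1, 50  # state dim, input dim, trajectory length

# Hypothetical unstable system (illustrative only, not from the paper).
A_true = np.array([[1.2, 0.1, 0.0],
                   [0.0, 1.1, 0.1],
                   [0.0, 0.0, 0.9]])
B_true = np.array([[0.0], [0.0], [1.0]])

# Roll out one trajectory driven by random inputs and non-degenerate
# process noise; the noise excites all modes, which aids identification.
X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))
for t in range(T):
    w = rng.normal(scale=0.1, size=n)
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + w

# Least-squares identification: X[t+1] ~ [A B] @ [x; u].
Z = np.hstack([X[:-1], U])                  # (T, n+m) regressors
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta[:n].T, Theta[n:].T     # estimates of A and B

# Certainty-equivalent stabilizing gain: iterate the discrete Riccati
# equation for the estimated model (LQR with Q = I, R = I).
Q, R, P = np.eye(n), np.eye(m), np.eye(n)
for _ in range(500):
    S = R + B_hat.T @ P @ B_hat
    P = Q + A_hat.T @ P @ A_hat \
        - A_hat.T @ P @ B_hat @ np.linalg.solve(S, B_hat.T @ P @ A_hat)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

rho = max(abs(np.linalg.eigvals(A_hat - B_hat @ K)))
print(f"closed-loop spectral radius of the model: {rho:.3f}")
```

Note the gap this hides: the LQR gain stabilizes the *estimated* model by construction, while the paper's question is how many samples are needed before such a gain also stabilizes the *true* system, which for the class it constructs grows exponentially with the dimension.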
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - System stabilization with policy optimization on unstable latent manifolds [0.5261718469769449]
Experiments demonstrate that the proposed approach stabilizes even complex physical systems from few data samples.
arXiv Detail & Related papers (2024-07-08T21:57:28Z) - Learning to Control Linear Systems can be Hard [19.034920102339573]
We study the statistical difficulty of learning to control linear systems.
We prove that the learning complexity can be at most exponential in the controllability index, that is, the degree of underactuation.
arXiv Detail & Related papers (2022-05-27T15:07:30Z) - Joint Learning-Based Stabilization of Multiple Unknown Linear Systems [3.453777970395065]
We propose a novel joint learning-based stabilization algorithm for quickly learning stabilizing policies for all systems under study.
The presented procedure is shown to be notably effective, stabilizing the entire family of dynamical systems in a short time.
arXiv Detail & Related papers (2022-01-01T15:30:44Z) - Bayesian Algorithms Learn to Stabilize Unknown Continuous-Time Systems [0.0]
Linear dynamical systems are canonical models for learning-based control of plants with uncertain dynamics.
A reliable stabilization procedure that can effectively learn from unstable data and stabilize the system in finite time is not currently available.
In this work, we propose a novel learning algorithm that stabilizes unknown continuous-time linear systems.
arXiv Detail & Related papers (2021-12-30T15:31:35Z) - Adversarially Robust Stability Certificates can be Sample-Efficient [14.658040519472646]
We consider learning adversarially robust stability certificates for unknown nonlinear dynamical systems.
We show that the statistical cost of learning an adversarial stability certificate is equivalent, up to constant factors, to that of learning a nominal stability certificate.
arXiv Detail & Related papers (2021-12-20T17:23:31Z) - Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z) - Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z) - Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z) - Active Learning for Nonlinear System Identification with Guarantees [102.43355665393067]
We study a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs.
We propose an active learning approach that achieves this by repeating three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.
We show that our method estimates nonlinear dynamical systems at a parametric rate, similar to the statistical rate of standard linear regression.
arXiv Detail & Related papers (2020-06-18T04:54:11Z) - Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
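Several entries above tie learning complexity to the controllability index, i.e. the degree of underactuation. As an illustrative aside (the function name and example matrices below are our own, not from any of the listed papers), the index is simply the smallest number of controllability-matrix blocks needed to reach full rank:

```python
import numpy as np

def controllability_index(A, B):
    """Smallest k such that [B, AB, ..., A^{k-1}B] has full row rank n,
    or None if (A, B) is uncontrollable."""
    n = A.shape[0]
    blocks, M = [], B
    for k in range(1, n + 1):
        blocks.append(M)
        if np.linalg.matrix_rank(np.hstack(blocks)) == n:
            return k
        M = A @ M          # next block of the controllability matrix
    return None

# A chain where the input enters only the last state is maximally
# underactuated; actuating every state drops the index to 1.
A = np.eye(3, k=1)                          # shift (integrator-chain) matrix
b = np.array([[0.0], [0.0], [1.0]])
print(controllability_index(A, b))          # 3
print(controllability_index(A, np.eye(3)))  # 1
```

Under the result summarized above, the single-input chain (index 3) is the kind of underactuated system for which learning to control can be exponentially harder than the fully actuated one (index 1).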
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.