Inference of Continuous Linear Systems from Data with Guaranteed
Stability
- URL: http://arxiv.org/abs/2301.10060v1
- Date: Tue, 24 Jan 2023 15:04:19 GMT
- Title: Inference of Continuous Linear Systems from Data with Guaranteed
Stability
- Authors: Pawan Goyal and Igor Pontes Duff and Peter Benner
- Abstract summary: This research focuses on learning continuous linear models from data.
We leverage the parameterization of stable matrices proposed in [Gillis/Sharma, Automatica, 2017] to realize the desired models.
To avoid estimating derivative information when learning continuous systems, we formulate the inference problem in an integral form.
- Score: 7.067529286680843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine-learning technologies for learning dynamical systems from data play
an important role in engineering design. This research focuses on learning
continuous linear models from data. Stability, a key feature of dynamic
systems, is especially important in design tasks such as prediction and
control. Thus, there is a need to develop methodologies that provide stability
guarantees. To that end, we leverage the parameterization of stable matrices
proposed in [Gillis/Sharma, Automatica, 2017] to realize the desired models.
Furthermore, to avoid estimating derivative information when learning
continuous systems, we formulate the inference problem in an integral form. We
also discuss a few extensions, including those related to control systems.
Numerical experiments show that the combination of a stable matrix
parameterization and an integral form of differential equations allows us to
learn stable systems without requiring derivative information, which can be
challenging to obtain in situations with noisy or limited data.
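
The abstract combines two concrete ingredients: the stable-matrix parameterization A = (J - R)Q from [Gillis/Sharma, Automatica, 2017], with J skew-symmetric, R positive semidefinite, and Q positive definite, and an integral reformulation of x'(t) = A x(t) that replaces derivative estimates with a quadrature of the sampled state. The sketch below is only a minimal illustration of how these two pieces fit together, not the authors' implementation; the trapezoidal quadrature, the L-BFGS fit, and all function and variable names are assumptions made for this example.

```python
# Minimal sketch (not the authors' code) of:
#  (i)  the stable-matrix parameterization A = (J - R) Q, with J skew-symmetric,
#       R positive semidefinite, Q positive definite [Gillis/Sharma, Automatica, 2017];
#  (ii) an integral (trapezoidal) reformulation of x' = A x that avoids
#       estimating derivatives from the sampled trajectory.
import numpy as np
from scipy.optimize import minimize

def assemble_A(theta, n):
    """Map unconstrained parameters theta to a stable matrix A = (J - R) Q."""
    W, Lr, Lq = theta.reshape(3, n, n)
    J = W - W.T                        # skew-symmetric
    R = Lr @ Lr.T                      # positive semidefinite
    Q = Lq @ Lq.T + 1e-6 * np.eye(n)   # positive definite
    return (J - R) @ Q

def integral_residual(theta, X, dt, n):
    """Least-squares residual of the integral form
       x_{k+1} - x_k  ~=  A * (dt/2) * (x_k + x_{k+1})   (trapezoidal quadrature)."""
    A = assemble_A(theta, n)
    diff = X[1:] - X[:-1]                         # left-hand side
    quad = 0.5 * dt * (X[1:] + X[:-1]) @ A.T      # right-hand side
    return np.sum((diff - quad) ** 2)

# Synthetic example: sample a trajectory of a known stable system and re-infer it.
rng = np.random.default_rng(0)
n, dt, steps = 3, 0.01, 400
A_true = np.array([[-0.5, 1.0, 0.0], [-1.0, -0.5, 0.0], [0.0, 0.0, -0.2]])
X = np.empty((steps + 1, n))
X[0] = rng.standard_normal(n)
for k in range(steps):                            # RK4 integration of x' = A_true x
    x = X[k]
    k1 = A_true @ x
    k2 = A_true @ (x + 0.5 * dt * k1)
    k3 = A_true @ (x + 0.5 * dt * k2)
    k4 = A_true @ (x + dt * k3)
    X[k + 1] = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

theta0 = 0.1 * rng.standard_normal(3 * n * n)
res = minimize(integral_residual, theta0, args=(X, dt, n), method="L-BFGS-B")
A_learned = assemble_A(res.x, n)
print("max real part of eigenvalues of learned A:",
      np.max(np.linalg.eigvals(A_learned).real))
```

Because every matrix produced by assemble_A is stable by construction, the optimizer never leaves the stable set, and no derivative estimates of the sampled trajectory are required; the paper's actual discretization and its extensions (e.g., to control inputs) are not reproduced here.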
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Learning to Boost the Performance of Stable Nonlinear Systems [0.0]
We tackle the performance-boosting problem with closed-loop stability guarantees.
Our methods enable learning over arbitrarily deep neural network classes of performance-boosting controllers for stable nonlinear systems.
arXiv Detail & Related papers (2024-05-01T21:11:29Z) - Stability-Certified Learning of Control Systems with Quadratic
Nonlinearities [9.599029891108229]
This work primarily focuses on an operator inference methodology aimed at constructing low-dimensional dynamical models.
Our main objective is to develop a method that facilitates the inference of quadratic control dynamical systems with inherent stability guarantees.
arXiv Detail & Related papers (2024-03-01T16:26:47Z) - Data-Driven Control with Inherent Lyapunov Stability [3.695480271934742]
We propose Control with Inherent Lyapunov Stability (CoILS) as a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data.
In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics.
arXiv Detail & Related papers (2023-03-06T14:21:42Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z) - Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns the basis functions for such a representation using supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z) - Knowledge-Based Learning of Nonlinear Dynamics and Chaos [3.673994921516517]
We present a universal learning framework for extracting predictive models from nonlinear systems based on observations.
Our framework can readily incorporate first-principles knowledge because it naturally models nonlinear systems as continuous-time systems.
arXiv Detail & Related papers (2020-10-07T13:50:13Z) - Active Learning for Nonlinear System Identification with Guarantees [102.43355665393067]
We study a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs.
We propose an active learning approach that achieves this by repeating three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.
We show that our method estimates nonlinear dynamical systems at a parametric rate, similar to the statistical rate of standard linear regression.
arXiv Detail & Related papers (2020-06-18T04:54:11Z) - Learning Stable Nonparametric Dynamical Systems with Gaussian Process
Regression [9.126353101382607]
We learn a nonparametric Lyapunov function from data using Gaussian process regression.
We prove that stabilization of the nominal model based on the nonparametric control Lyapunov function does not modify the behavior of the nominal model at training samples.
arXiv Detail & Related papers (2020-06-14T11:17:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.