Learning Stable Koopman Embeddings
- URL: http://arxiv.org/abs/2110.06509v1
- Date: Wed, 13 Oct 2021 05:44:13 GMT
- Title: Learning Stable Koopman Embeddings
- Authors: Fletcher Fan, Bowen Yi, David Rye, Guodong Shi, Ian R. Manchester
- Abstract summary: We present a new data-driven method for learning stable models of nonlinear systems.
We prove that every discrete-time nonlinear contracting model can be learnt in our framework.
- Score: 9.239657838690228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a new data-driven method for learning stable models
of nonlinear systems. Our model lifts the original state space to a
higher-dimensional linear manifold using Koopman embeddings. Interestingly, we
prove that every discrete-time nonlinear contracting model can be learnt in our
framework. Another significant merit of the proposed approach is that it allows
for unconstrained optimization over the Koopman embedding and operator jointly
while enforcing stability of the model, via a direct parameterization of stable
linear systems, greatly simplifying the computations involved. We validate our
method on a simulated system and analyze the advantages of our parameterization
compared to alternatives.
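The stability idea behind the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual direct parameterization (which is richer and provably covers all Schur-stable linear systems): any unconstrained square matrix W can be mapped to an operator A with spectral norm below one, so the lifted linear dynamics z[k+1] = A z[k] are stable by construction and W can be optimized freely.

```python
import numpy as np

def stable_operator(W):
    """Map an arbitrary matrix W to A with ||A||_2 < 1.

    Since the spectral radius is bounded above by the spectral norm,
    rho(A) < 1, so z[k+1] = A @ z[k] is Schur-stable for every W.
    This is a conservative sketch; the paper's parameterization is
    less restrictive and covers all stable linear systems.
    """
    return W / (1.0 + np.linalg.norm(W, 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # unconstrained parameters
A = stable_operator(W)
rho = np.max(np.abs(np.linalg.eigvals(A)))
print(rho < 1.0)              # stability holds by construction
```

Because stability is guaranteed for every W, a gradient-based optimizer can search over W (and the embedding jointly) without any stability constraints, which is the computational simplification the abstract refers to.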
Related papers
- Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a methodology for finding sequences of machine learning models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
Our method shows stronger stability than greedily trained models with a small, controllable sacrifice in predictive power.
arXiv Detail & Related papers (2024-03-28T22:45:38Z)
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- Learned Lifted Linearization Applied to Unstable Dynamic Systems Enabled by Koopman Direct Encoding [11.650381752104296]
It is known that dynamic mode decomposition (DMD) and other data-driven methods face a fundamental difficulty in constructing a Koopman model when applied to unstable systems.
Here we solve the problem by incorporating knowledge about a nonlinear state equation with a learning method for finding an effective set of observables.
The proposed method shows a dramatic improvement over existing DMD and data-driven methods.
arXiv Detail & Related papers (2022-10-24T20:55:46Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Learning over All Stabilizing Nonlinear Controllers for a Partially-Observed Linear System [4.3012765978447565]
We propose a parameterization of nonlinear output feedback controllers for linear dynamical systems.
Our approach guarantees the closed-loop stability of partially observable linear dynamical systems without requiring any constraints to be satisfied.
arXiv Detail & Related papers (2021-12-08T10:43:47Z)
- Self-Supervised Hybrid Inference in State-Space Models [0.0]
We perform approximate inference in state-space models that allow for nonlinear higher-order Markov chains in latent space.
We do not rely on an additional parameterization of the generative model or supervision via uncorrupted observations or ground truth latent states.
We obtain competitive results on the chaotic Lorenz system compared to a fully supervised approach and outperform a method based on variational inference.
arXiv Detail & Related papers (2021-07-28T13:26:14Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
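The ratio-estimation idea summarized above can be sketched on a toy problem. Everything below is an illustrative assumption rather than the paper's setup: a Gaussian simulator x ~ N(theta, 1), quadratic features, and a logistic-regression classifier trained to distinguish dependent (theta, x) pairs from pairs with shuffled theta. The classifier's logit then approximates the log likelihood-to-evidence ratio log p(x|theta)/p(x).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy simulator (illustrative assumption): theta ~ N(0, 1), x ~ N(theta, 1)
rng = np.random.default_rng(0)
n = 20000
theta = rng.normal(size=n)
x = theta + rng.normal(size=n)

def features(t, x):
    # Quadratic features so a linear classifier can represent the
    # true Gaussian log-ratio, which is quadratic in (theta, x).
    return np.column_stack([t, x, t * x, t**2, x**2])

# Label 1: dependent (joint) pairs; label 0: theta shuffled (marginal) pairs
feats = np.vstack([features(theta, x), features(rng.permutation(theta), x)])
labels = np.concatenate([np.ones(n), np.zeros(n)])
clf = LogisticRegression(max_iter=1000).fit(feats, labels)

def log_ratio(t, x):
    # Classifier logit ~ log p(x|theta) / p(x), the amortized
    # likelihood-to-evidence ratio used for inference.
    return clf.decision_function(features(np.atleast_1d(t), np.atleast_1d(x)))[0]

print(log_ratio(1.0, 1.0) > log_ratio(1.0, -1.0))  # matched pair scores higher
```

Once trained, the ratio estimate is amortized: it can be evaluated for any (theta, x) pair without running the simulator again, which is what makes this family of methods practical when the likelihood itself is intractable.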
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Learning Unstable Dynamics with One Minute of Data: A Differentiation-based Gaussian Process Approach [47.045588297201434]
We show how to exploit the differentiability of Gaussian processes to create a state-dependent linearized approximation of the true continuous dynamics.
We validate our approach by iteratively learning the system dynamics of an unstable system such as a 9-D segway.
arXiv Detail & Related papers (2021-03-08T05:08:47Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows [29.310742141970394]
We introduce ImitationFlow, a novel Deep generative model that allows learning complex globally stable, nonlinear dynamics.
We show the effectiveness of our method with both standard datasets and a real robot experiment.
arXiv Detail & Related papers (2020-10-25T14:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.