Nonparametric Control-Koopman Operator Learning: Flexible and Scalable Models for Prediction and Control
- URL: http://arxiv.org/abs/2405.07312v1
- Date: Sun, 12 May 2024 15:46:52 GMT
- Title: Nonparametric Control-Koopman Operator Learning: Flexible and Scalable Models for Prediction and Control
- Authors: Petar Bevanda, Bas Driessen, Lucian Cristian Iacob, Roland Toth, Stefan Sosnowski, Sandra Hirche
- Abstract summary: We present a nonparametric framework for learning Koopman operator representations of nonlinear control-affine systems.
We also enhance the scalability of control-Koopman operator estimators by leveraging random projections.
The efficacy of our novel cKOR approach is demonstrated on both forecasting and control tasks.
- Score: 2.7784144651669704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linearity of Koopman operators and the simplicity of their estimators, coupled with model-reduction capabilities, have led to their great popularity in applications for learning dynamical systems. While nonparametric Koopman operator learning in infinite-dimensional reproducing kernel Hilbert spaces is well understood for autonomous systems, its control-system analogues are largely unexplored. Addressing systems with control inputs in a principled manner is crucial for fully data-driven learning of controllers, especially since existing approaches commonly resort to representational heuristics or parametric models of limited expressiveness and scalability. We address this challenge by proposing a universal framework via control-affine reproducing kernels that enables direct estimation of a single operator even for control systems. The proposed approach, called control-Koopman operator regression (cKOR), is thus completely analogous to Koopman operator regression in the autonomous case. For the first time in the literature, we present a nonparametric framework for learning Koopman operator representations of nonlinear control-affine systems that does not suffer from the curse of control-input dimensionality. This allows for reformulating the infinite-dimensional learning problem in a finite-dimensional space based solely on data, without an a priori loss of precision due to a restriction to a finite span of functions or inputs as in other approaches. To enable applications to large-scale control systems, we also enhance the scalability of control-Koopman operator estimators by leveraging random projections (sketching). The efficacy of our novel cKOR approach is demonstrated on both forecasting and control tasks.
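To make the operator-regression idea concrete, the following is a minimal, self-contained sketch of a finite-dimensional stand-in for control-Koopman operator regression. The Gaussian random-feature lift, the product-form control-affine feature map z(x, u) = [phi(x), u_1 phi(x), ...], the toy pendulum-like system, and the ridge solve are illustrative assumptions of this sketch, not the estimator defined in the paper.

```python
# Illustrative sketch only: a finite-dimensional stand-in for control-affine
# Koopman operator regression (cKOR). Kernel choice, feature map, toy data,
# and solver below are generic assumptions, not the paper's exact estimator.
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, W, b):
    """Random Fourier features approximating a Gaussian kernel on the state."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def control_affine_features(phi_x, U):
    """Feature map of a control-affine kernel:
    z(x, u) = [phi(x), u_1 * phi(x), ..., u_m * phi(x)]."""
    return np.hstack([phi_x] + [U[:, [i]] * phi_x for i in range(U.shape[1])])

# Toy control-affine system: x_{k+1} = x + dt * (f(x) + g(x) u)
dt, N, m = 0.05, 2000, 1
X = rng.uniform(-2, 2, size=(N, 2))
U = rng.uniform(-1, 1, size=(N, m))
f = np.stack([X[:, 1], -np.sin(X[:, 0])], axis=1)
g = np.stack([np.zeros(N), np.ones(N)], axis=1)
Y = X + dt * (f + g * U)                     # next states

# Random projection of the state kernel (a simple sketching stand-in)
D = 200
W = rng.normal(scale=1.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Phi_x = random_fourier_features(X, W, b)
Phi_y = random_fourier_features(Y, W, b)

# A single operator G mapping control-affine features to next-state features,
# estimated by ridge regression (the finite-dimensional analogue of cKOR).
Z = control_affine_features(Phi_x, U)
lam = 1e-6
G = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Phi_y)

# One-step prediction in the lifted space for a new state-input pair
x0, u0 = np.array([[0.5, -0.3]]), np.array([[0.2]])
z0 = control_affine_features(random_fourier_features(x0, W, b), u0)
phi_next_pred = z0 @ G
```

Note that the random Fourier features here play the role of the random projections only loosely: the paper's estimator works with kernel evaluations and sketches the resulting operator problem, and state predictions would in practice be recovered by including the state coordinates among the observables or by learning a decoder.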
Related papers
- Deep Koopman Operator with Control for Nonlinear Systems [44.472875714432504]
We propose an end-to-end deep learning framework to learn the Koopman embedding function and Koopman Operator.
We first parameterize the embedding function and Koopman Operator with neural networks and train them end-to-end with a K-step loss function (see the sketch after this list).
We then design an auxiliary control network to encode the nonlinear state-dependent control term to model the nonlinearity in control input.
arXiv Detail & Related papers (2022-02-16T11:40:36Z)
- Towards Data-driven LQR with KoopmanizingFlows [8.133902705930327]
We propose a novel framework for learning linear time-invariant (LTI) models for a class of continuous-time non-autonomous nonlinear dynamics.
We learn a finite representation of the Koopman operator that is linear in controls while concurrently learning meaningful lifting coordinates.
arXiv Detail & Related papers (2022-01-27T17:02:03Z)
- Neural Koopman Lyapunov Control [0.0]
We propose a framework to identify and construct stabilizable bilinear control systems and their associated observables from data.
Our proposed approach provides provable guarantees of global stability for the nonlinear control systems with unknown dynamics.
arXiv Detail & Related papers (2022-01-13T17:38:09Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\, u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Discrete-time Contraction-based Control of Nonlinear Systems with Parametric Uncertainties using Neural Networks [6.804154699470765]
This work develops an approach to discrete-time contraction analysis and control using neural networks.
The methodology involves training a neural network to learn a contraction metric and feedback gain.
The resulting contraction-based controller embeds the trained neural network and is capable of achieving efficient tracking of time-varying references.
arXiv Detail & Related papers (2021-05-12T05:07:34Z)
- Data-driven Koopman Operators for Model-based Shared Control of Human-Machine Systems [66.65503164312705]
We present a data-driven shared control algorithm that can be used to improve a human operator's control of complex machines.
Both the dynamics and information about the user's interaction are learned from observation through the use of a Koopman operator.
We find that model-based shared control significantly improves task and control metrics compared to a natural-learning (user-only) control paradigm.
arXiv Detail & Related papers (2020-06-12T14:14:07Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning [85.13138591433635]
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and the inability to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
- Technical Report: Adaptive Control for Linearizable Systems Using On-Policy Reinforcement Learning [41.24484153212002]
This paper proposes a framework for adaptively learning a feedback linearization-based tracking controller for an unknown system.
It does not require the learned inverse model to be invertible at all instances of time.
A simulated example of a double pendulum demonstrates the utility of the proposed theory.
arXiv Detail & Related papers (2020-04-06T15:50:31Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
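Several of the related papers above, in particular "Deep Koopman Operator with Control for Nonlinear Systems" and "Towards Data-driven LQR with KoopmanizingFlows", train a lifted model in which the dynamics are linear in the lifted state and the control enters (bi)linearly, using a multi-step prediction loss. The sketch below, referenced from the first entry, shows one way such a K-step loss can be written; the network sizes, the plain linear control term standing in for that paper's auxiliary control network, and the loss weighting are assumptions of this sketch, not the architecture of those papers.

```python
# Minimal sketch of a K-step prediction loss for a lifted model with control.
# Sizes, the linear control term, and the loss weighting are illustrative.
import torch
import torch.nn as nn

class LiftedControlModel(nn.Module):
    def __init__(self, n_x=2, n_u=1, n_z=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_x, 64), nn.Tanh(), nn.Linear(64, n_z))
        self.A = nn.Linear(n_z, n_z, bias=False)   # Koopman matrix acting on the lift
        self.B = nn.Linear(n_u, n_z, bias=False)   # control enters linearly in the lift
        self.decoder = nn.Linear(n_z, n_x)         # map the lift back to the state

    def step(self, z, u):
        return self.A(z) + self.B(u)

def k_step_loss(model, x_traj, u_traj, K=5):
    """x_traj: (batch, T, n_x) with T >= K + 1, u_traj: (batch, T - 1, n_u).
    Rolls the lifted dynamics forward K steps and penalizes the decoded-state
    prediction error at each step."""
    z = model.encoder(x_traj[:, 0])
    loss = 0.0
    for k in range(K):
        z = model.step(z, u_traj[:, k])
        loss = loss + torch.mean((model.decoder(z) - x_traj[:, k + 1]) ** 2)
    return loss / K

# Example usage on random data (shapes only):
# model = LiftedControlModel()
# x, u = torch.randn(32, 6, 2), torch.randn(32, 5, 1)
# loss = k_step_loss(model, x, u, K=5); loss.backward()
```

In practice such a loss is typically combined with a reconstruction term for the decoder and optimized with a standard stochastic gradient method.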