Random Features Approximation for Control-Affine Systems
- URL: http://arxiv.org/abs/2406.06514v2
- Date: Tue, 11 Jun 2024 02:32:16 GMT
- Title: Random Features Approximation for Control-Affine Systems
- Authors: Kimia Kazemian, Yahya Sattar, Sarah Dean
- Abstract summary: We propose two novel classes of nonlinear feature representations which capture control affine structure.
Our methods make use of random features (RF) approximations, inheriting the expressiveness of kernel methods at a lower computational cost.
- Score: 6.067043299145924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern data-driven control applications call for flexible nonlinear models that are amenable to principled controller synthesis and realtime feedback. Many nonlinear dynamical systems of interest are control affine. We propose two novel classes of nonlinear feature representations which capture control affine structure while allowing for arbitrary complexity in the state dependence. Our methods make use of random features (RF) approximations, inheriting the expressiveness of kernel methods at a lower computational cost. We formalize the representational capabilities of our methods by showing their relationship to the Affine Dot Product (ADP) kernel proposed by Castañeda et al. (2021) and a novel Affine Dense (AD) kernel that we introduce. We further illustrate the utility by presenting a case study of data-driven optimization-based control using control certificate functions (CCF). Simulation experiments on a double pendulum empirically demonstrate the advantages of our methods.
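The core idea can be sketched in a few lines. Below is a minimal illustration (not the paper's exact feature construction): a control-affine model $\dot{x} = f(x) + g(x)u$ is fit with random Fourier features of the state, so the learned dynamics remain affine in the control; the feature dimension, bandwidth, and ridge-regression fit are assumptions chosen for the demo.

```python
import numpy as np

# Minimal sketch (not the paper's exact construction): fit a control-affine
# model  x_dot = f(x) + g(x) u  with random Fourier features of the state,
# keeping the affine dependence on the control.
rng = np.random.default_rng(0)
n_x, n_u, n_feat = 4, 2, 200                    # assumed dimensions / feature count

# Random Fourier features phi(x) approximating a Gaussian (RBF) kernel.
W = rng.normal(scale=1.0, size=(n_feat, n_x))   # random frequencies
b = rng.uniform(0, 2 * np.pi, size=n_feat)      # random phases

def phi(x):
    return np.sqrt(2.0 / n_feat) * np.cos(W @ x + b)

def features(x, u):
    # Control-affine feature map: phi(x) for f, and phi(x) * u_j for each g_j.
    return np.concatenate([phi(x)] + [phi(x) * uj for uj in u])

def fit(X, U, Xdot, lam=1e-3):
    # Ridge regression of x_dot on the control-affine features.
    Z = np.stack([features(x, u) for x, u in zip(X, U)])
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Xdot)

def predict(theta, x, u):
    return features(x, u) @ theta               # affine in u for fixed x
```

Because the prediction stays affine in the control for a fixed state, downstream optimization-based controller synthesis (such as the CCF programs studied in the paper) remains convex in the control input.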
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
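As a toy illustration only (this is not the paper's estimator), the sketch below simulates a scalar controlled SDE with Euler-Maruyama and recovers drift and squared-diffusion coefficients by regressing increments on simple hand-picked bases; the true drift, diffusion, bases, and step size are all assumptions.

```python
import numpy as np

# Illustrative sketch: recover drift and diffusion of a scalar controlled SDE
#   dX = f(X, u) dt + sigma(X) dW
# from Euler-Maruyama increments via basis regression (not the paper's method).
rng = np.random.default_rng(1)
dt, N = 1e-2, 20_000

def f(x, u):       # true drift (assumed for the demo)
    return -x + u

def sigma(x):      # true non-uniform diffusion (assumed for the demo)
    return 0.5 + 0.2 * np.tanh(x)

# Simulate one trajectory under a random control signal.
X = np.zeros(N + 1)
U = rng.uniform(-1, 1, size=N)
for k in range(N):
    X[k + 1] = X[k] + f(X[k], U[k]) * dt + sigma(X[k]) * np.sqrt(dt) * rng.normal()

# Drift regression: E[dX | x, u] ~ f(x, u) dt on the basis [1, x, u].
dX = np.diff(X)
B = np.column_stack([np.ones(N), X[:-1], U])
drift_coef = np.linalg.lstsq(B, dX / dt, rcond=None)[0]

# Diffusion regression: E[dX^2 | x] ~ sigma(x)^2 dt on the basis [1, x].
B2 = np.column_stack([np.ones(N), X[:-1]])
diff_coef = np.linalg.lstsq(B2, dX**2 / dt, rcond=None)[0]
print("estimated drift coefficients [c, a_x, a_u]:", drift_coef)
print("estimated sigma^2 coefficients [c, a_x]:   ", diff_coef)
```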
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Data-Driven Optimal Feedback Laws via Kernel Mean Embeddings [3.007066256364399]
We introduce kernel mean embeddings (KMEs) to identify the Markov transition operators associated with controlled diffusion processes.
Unlike traditional dynamic programming methods, our approach exploits the "kernel trick" to break the curse of dimensionality.
We demonstrate the effectiveness of our method through numerical examples, highlighting its ability to solve a large class of nonlinear optimal control problems.
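To make the embedding idea concrete, here is a small hedged sketch (the toy dynamics, RBF kernel, and regularization are assumptions, not the paper's setup): a conditional kernel mean embedding of the transition kernel is estimated from sampled transitions and used to approximate E[V(x') | x, u].

```python
import numpy as np

# Sketch of a conditional kernel mean embedding of x' ~ P(. | x, u),
# estimated from sampled transitions (toy setup, hypothetical details).
rng = np.random.default_rng(2)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Sampled transitions (x_i, u_i) -> y_i from a toy controlled process.
N, n_x, n_u = 300, 2, 1
X = rng.normal(size=(N, n_x))
U = rng.normal(size=(N, n_u))
Y = 0.9 * X + 0.1 * U + 0.05 * rng.normal(size=(N, n_x))

Z = np.hstack([X, U])
K = rbf(Z, Z)
alpha = np.linalg.solve(K + 1e-3 * np.eye(N), np.eye(N))  # (K + lam I)^{-1}

def expected_value(V, x, u):
    """Estimate E[V(x') | x, u] via the empirical conditional embedding."""
    z = np.hstack([x, u])[None, :]
    beta = rbf(Z, z).T @ alpha             # weights over the sampled next states
    return (beta @ V(Y)).item()

V = lambda y: (y ** 2).sum(axis=1)         # example value/cost function
print(expected_value(V, np.zeros(n_x), np.zeros(n_u)))
```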
arXiv Detail & Related papers (2024-07-23T11:53:03Z) - Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level
Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z) - Sample-efficient Model-based Reinforcement Learning for Quantum Control [0.2999888908665658]
We propose a model-based reinforcement learning (RL) approach for noisy time-dependent gate optimization.
We show an order of magnitude advantage in the sample complexity of our method over standard model-free RL.
Our algorithm is well suited for controlling partially characterised one and two qubit systems.
arXiv Detail & Related papers (2023-04-19T15:05:19Z) - Learning Control-Oriented Dynamical Structure from Data [25.316358215670274]
We discuss a state-dependent nonlinear tracking controller formulation for general nonlinear control-affine systems.
We empirically demonstrate the efficacy of learned versions of this controller in stable trajectory tracking.
arXiv Detail & Related papers (2023-02-06T02:01:38Z) - Model-Based Reinforcement Learning via Stochastic Hybrid Models [39.83837705993256]
This paper adopts a hybrid-system view of nonlinear modeling and control.
We consider a sequence modeling paradigm that captures the temporal structure of the data.
We show that these time-series models naturally admit a closed-loop extension that we use to extract local feedback controllers.
arXiv Detail & Related papers (2021-11-11T14:05:46Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
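For intuition, the toy sketch below integrates the flow of such a linear-control system for two hand-picked vector fields and piecewise-constant controls, and applies it to an ensemble of points; the fields, controls, and explicit Euler integration are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy flow of a linear-control system  x_dot = sum_i F_i(x) u_i  with
# piecewise-constant controls, applied to an ensemble of points.
def F1(x):
    return np.array([x[1], -x[0]])              # rotation-like field

def F2(x):
    return x                                    # dilation field

def flow(points, controls, dt=1e-2, steps_per_segment=50):
    fields = [F1, F2]
    pts = points.copy()
    for u in controls:                          # one control vector per time segment
        for _ in range(steps_per_segment):
            vel = np.stack([sum(ui * Fi(p) for ui, Fi in zip(u, fields)) for p in pts])
            pts = pts + dt * vel                # explicit Euler step
    return pts

ensemble = np.random.default_rng(3).normal(size=(20, 2))
moved = flow(ensemble, controls=[(1.0, 0.0), (0.2, 0.5)])
```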
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
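Roughly, such compound kernels take the form k((x,u),(x',u')) = [1, u]^T diag(k_0(x,x'), ..., k_m(x,x')) [1, u'], with one base kernel per channel, which keeps GP predictions affine in the control. The snippet below evaluates one such kernel with RBF base kernels chosen purely for illustration (a sketch, not the paper's implementation).

```python
import numpy as np

# Illustrative evaluation of an ADP-style compound kernel
# k((x,u),(x',u')) = [1, u]^T diag(k_0(x,x'), ..., k_m(x,x')) [1, u'],
# with RBF base kernels chosen here only for demonstration.
def rbf(x, xp, gamma):
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def compound_kernel(x, u, xp, up, gammas):
    ua = np.concatenate([[1.0], u])             # augmented control [1, u]
    upa = np.concatenate([[1.0], up])
    base = np.array([rbf(x, xp, g) for g in gammas])   # one base kernel per channel
    return float(ua @ (base * upa))

x, xp = np.array([0.1, -0.3]), np.array([0.2, 0.0])
u, up = np.array([0.5]), np.array([-0.4])
print(compound_kernel(x, u, xp, up, gammas=[1.0, 0.5]))
```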
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Technical Report: Adaptive Control for Linearizable Systems Using
On-Policy Reinforcement Learning [41.24484153212002]
This paper proposes a framework for adaptively learning a feedback linearization-based tracking controller for an unknown system.
It does not require the learned inverse model to be invertible at all instances of time.
A simulated example of a double pendulum demonstrates the utility of the proposed theory.
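For context, a schematic feedback-linearizing tracking law looks like the sketch below, written with placeholder learned models f_hat and g_hat and a pseudoinverse in place of a strict inverse; the names, gains, and example models are hypothetical, and this is not the paper's adaptive update.

```python
import numpy as np

# Schematic feedback-linearizing tracking law for  q_ddot = f(q, q_dot) + g(q, q_dot) u:
# choose u = pinv(g_hat) (v - f_hat) with a PD virtual input v.
def fb_lin_control(q, qd, q_ref, qd_ref, qdd_ref, f_hat, g_hat, Kp=25.0, Kd=10.0):
    e, ed = q - q_ref, qd - qd_ref
    v = qdd_ref - Kp * e - Kd * ed                  # desired acceleration
    G = np.atleast_2d(g_hat(q, qd))
    return np.linalg.pinv(G) @ (v - f_hat(q, qd))   # pinv: no strict invertibility needed

# Example with hypothetical (assumed) learned models for a 2-DoF system.
f_hat = lambda q, qd: -0.1 * qd
g_hat = lambda q, qd: np.eye(2)
u = fb_lin_control(np.zeros(2), np.zeros(2),
                   np.ones(2), np.zeros(2), np.zeros(2), f_hat, g_hat)
```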
arXiv Detail & Related papers (2020-04-06T15:50:31Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
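The counterexample-guided loop can be caricatured as below, with a quadratic candidate and a sampling-based falsifier standing in for the paper's neural candidate and symbolic verifier; the dynamics, step size, and projection are all assumptions for the toy.

```python
import numpy as np

# Caricature of a counterexample-guided loop: a quadratic candidate
# V(x) = x^T P x and a sampling-based falsifier stand in for the paper's
# neural Lyapunov candidate and symbolic verifier (assumed toy dynamics).
rng = np.random.default_rng(4)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])            # assumed stable linear system

def vdot(P, x):                                     # d/dt of x^T P x along x_dot = A x
    return x @ (A.T @ P + P @ A) @ x

P = np.eye(2)
dataset = list(rng.uniform(-2, 2, size=(10, 2)))
for _ in range(100):
    # Falsifier: sample states and keep those violating the decrease condition.
    samples = rng.uniform(-2, 2, size=(200, 2))
    cex = [x for x in samples if vdot(P, x) >= -1e-6 * (x @ x)]
    if not cex:
        break                                       # no counterexample found on samples
    dataset += cex[:10]
    # Learner: nudge P to decrease V-dot on the collected counterexamples...
    step = 0.05 / len(dataset)
    for x in dataset:
        grad = np.outer(A @ x, x) + np.outer(x, A @ x)   # gradient of vdot w.r.t. P
        P -= step * grad
    # ...and project back to a symmetric positive-definite candidate.
    P = 0.5 * (P + P.T)
    w, Q = np.linalg.eigh(P)
    P = Q @ np.diag(np.clip(w, 0.1, None)) @ Q.T

print("candidate P:", P)
```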
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.