Implicit Regularization and Momentum Algorithms in Nonlinearly
Parameterized Adaptive Control and Prediction
- URL: http://arxiv.org/abs/1912.13154v7
- Date: Fri, 29 Sep 2023 22:01:57 GMT
- Title: Implicit Regularization and Momentum Algorithms in Nonlinearly
Parameterized Adaptive Control and Prediction
- Authors: Nicholas M. Boffi, Jean-Jacques E. Slotine
- Abstract summary: We exploit strong connections between classical adaptive nonlinear control techniques and recent progress in machine learning.
We show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction.
- Score: 13.860437051795419
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stable concurrent learning and control of dynamical systems is the subject of
adaptive control. Despite being an established field with many practical
applications and a rich theory, much of the development in adaptive control for
nonlinear systems revolves around a few key algorithms. By exploiting strong
connections between classical adaptive nonlinear control techniques and recent
progress in optimization and machine learning, we show that there exists
considerable untapped potential in algorithm development for both adaptive
nonlinear control and adaptive dynamics prediction. We begin by introducing
first-order adaptation laws inspired by natural gradient descent and mirror
descent. We prove that when there are multiple dynamics consistent with the
data, these non-Euclidean adaptation laws implicitly regularize the learned
model. Local geometry imposed during learning thus may be used to select
parameter vectors -- out of the many that will achieve perfect tracking or
prediction -- for desired properties such as sparsity. We apply this result to
regularized dynamics predictor and observer design, and as concrete examples,
we consider Hamiltonian systems, Lagrangian systems, and recurrent neural
networks. We subsequently develop a variational formalism based on the Bregman
Lagrangian. We show that its Euler-Lagrange equations lead to natural gradient
and mirror descent-like adaptation laws with momentum, and we recover their
first-order analogues in the infinite friction limit. We illustrate our
analyses with simulations demonstrating our theoretical results.
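The abstract's implicit-regularization claim (that the geometry of a non-Euclidean adaptation law selects which of the many data-consistent parameter vectors is learned) can be illustrated on a toy problem. The sketch below is not the paper's algorithm, just a generic comparison of Euclidean gradient descent against mirror descent under a hypentropy-style potential on an underdetermined regression; the problem sizes, potential, and step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined regression: many parameter vectors fit the data exactly,
# mirroring the "multiple dynamics consistent with the data" setting.
n, p = 20, 100
Y = rng.standard_normal((n, p))          # regressor matrix
theta_true = np.zeros(p)
theta_true[:3] = [2.0, 1.0, 0.5]         # sparse ground-truth parameters
f = Y @ theta_true                       # observed outputs

lr, steps = 1e-3, 30000

# (1) Plain (Euclidean) gradient descent from zero converges to the
#     minimum-l2-norm interpolant.
th_gd = np.zeros(p)
for _ in range(steps):
    th_gd -= lr * Y.T @ (Y @ th_gd - f)

# (2) Mirror descent: take gradient steps on the dual variable
#     z = grad(psi)(theta) and map back through the inverse mirror map.
#     Here psi is a hypentropy-style potential with grad(psi) = arcsinh(./b);
#     for small b its local geometry approximates l1 and biases the selected
#     interpolant toward sparsity -- the implicit regularization effect.
b = 1e-3
z = np.zeros(p)                          # dual variable, z = arcsinh(theta / b)
for _ in range(steps):
    th = b * np.sinh(z)                  # inverse mirror map back to primal
    z -= lr * Y.T @ (Y @ th - f)
th_md = b * np.sinh(z)

# Both solutions interpolate the data, but the mirror-descent solution is
# markedly sparser: its l1 norm is far below that of the l2-minimizing one.
```

Both runs drive the residual to zero, yet the learned parameter vectors differ: the potential chosen for the mirror map, not the data, decides which interpolant is selected, which is the mechanism the abstract exploits for regularized predictor and observer design.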
Related papers
- Receding Hamiltonian-Informed Optimal Neural Control and State Estimation for Closed-Loop Dynamical Systems [4.05766189327054]
Hamiltonian-Informed Optimal Neural (Hion) controllers are a novel class of neural network-based controllers for dynamical systems.
Hion controllers estimate future states and compute optimal control inputs using Pontryagin's Principle.
arXiv Detail & Related papers (2024-11-02T16:06:29Z)
- Reinforced Model Predictive Control via Trust-Region Quasi-Newton Policy Optimization [0.0]
We propose a Quasi-Newton training algorithm for policy optimization with a superlinear convergence rate.
A simulation study illustrates that the proposed training algorithm outperforms other algorithms in terms of data efficiency and accuracy.
arXiv Detail & Related papers (2024-05-28T09:16:08Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Adaptive Robust Model Predictive Control via Uncertainty Cancellation [25.736296938185074]
We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics.
We optimize over a class of nonlinear feedback policies inspired by certainty equivalent "estimate-and-cancel" control laws.
arXiv Detail & Related papers (2022-12-02T18:54:23Z)
- Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles [5.650647159993238]
In this work, we leverage deep learning tools, namely knowledge-based neural ordinary differential equations (KNODE) and deep ensembles, to improve the prediction accuracy of a model predictive control (MPC) framework.
In particular, we learn an ensemble of KNODE models, which we refer to as the KNODE ensemble, to obtain an accurate prediction of the true system dynamics.
We show that the KNODE ensemble provides more accurate predictions and illustrate the efficacy and closed-loop performance of the proposed nonlinear MPC framework.
arXiv Detail & Related papers (2022-11-24T23:51:18Z)
- Neural ODEs as Feedback Policies for Nonlinear Optimal Control [1.8514606155611764]
We use neural ordinary differential equations (Neural ODEs) to model continuous-time dynamics as differential equations parameterized by neural networks.
We propose the use of a neural control policy posed as a Neural ODE to solve general nonlinear optimal control problems.
arXiv Detail & Related papers (2022-10-20T13:19:26Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Deformable Linear Object Prediction Using Locally Linear Latent Dynamics [51.740998379872195]
Prediction of deformable objects (e.g., rope) is challenging due to their non-linear dynamics and infinite-dimensional configuration spaces.
We learn a locally linear, action-conditioned dynamics model that can be used to predict future latent states.
We empirically demonstrate that our approach can predict the rope state accurately up to ten steps into the future.
arXiv Detail & Related papers (2021-03-26T00:29:31Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems [91.43582419264763]
We study the problem of system identification and adaptive control in partially observable linear dynamical systems.
We present the first model estimation method with finite-time guarantees in both open and closed-loop system identification.
We show that AdaptOn is the first algorithm that achieves $\text{polylog}\left(T\right)$ regret in adaptive control of unknown partially observable linear dynamical systems.
arXiv Detail & Related papers (2020-03-25T06:00:33Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.