Neural ODEs as Feedback Policies for Nonlinear Optimal Control
- URL: http://arxiv.org/abs/2210.11245v1
- Date: Thu, 20 Oct 2022 13:19:26 GMT
- Title: Neural ODEs as Feedback Policies for Nonlinear Optimal Control
- Authors: Ilya Orson Sandoval, Panagiotis Petsagkourakis, Ehecatl Antonio del
Rio-Chanona
- Abstract summary: We use Neural ordinary differential equations (Neural ODEs) to model continuous time dynamics as differential equations parametrized with neural networks.
We propose the use of a neural control policy posed as a Neural ODE to solve general nonlinear optimal control problems.
- Score: 1.8514606155611764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural ordinary differential equations (Neural ODEs) model
continuous-time dynamics as differential equations parametrized with neural
networks. Thanks to their modeling flexibility, they have been adopted for
multiple tasks in which the continuous-time nature of the process is
especially relevant, such as system identification and time series analysis.
When applied in a control setting, they can be adapted to approximate optimal
nonlinear feedback policies. This formulation follows the same approach as
policy gradients in reinforcement learning, covering the case where the
environment consists of known deterministic dynamics given by a system of
differential equations. The white-box nature of the model specification allows
the direct calculation of policy gradients through sensitivity analysis,
avoiding the inexact and inefficient gradient estimation through sampling. In
this work, we propose the use of a neural control policy posed as a Neural ODE
to solve general nonlinear optimal control problems while satisfying both
state and control constraints, which are crucial for real-world scenarios.
Since the state feedback policy partially modifies the model dynamics, the
whole phase space of the system is reshaped by the optimization. This approach
is a sensible approximation to the historically intractable closed-loop
solution of nonlinear control problems, one that efficiently exploits the
availability of a dynamical system model.
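
To make the formulation concrete, the sketch below illustrates the general idea in a toy setting: a small neural network acts as a state-feedback policy, the known dynamics are closed over it, and the policy gradient is obtained by differentiating the rollout cost with respect to the network parameters. This is not the authors' implementation; the single-state system, the tanh squashing used for the control bound, the quadratic penalty used for the state constraint, and the discretize-then-differentiate RK4 integration are illustrative assumptions (the paper itself relies on continuous-time sensitivity analysis of the ODE).

```python
# Minimal sketch of a neural state-feedback policy trained by differentiating
# through known dynamics. Toy system, constraint handling, and integrator are
# illustrative assumptions, not the paper's implementation.
import jax
import jax.numpy as jnp

T, N = 2.0, 200                 # horizon and number of integration steps
dt = T / N
u_max = 1.0                     # control bound |u| <= u_max (illustrative)
x_min = -0.5                    # state constraint x >= x_min (illustrative)

def init_params(key, sizes=(1, 16, 16, 1)):
    """Small MLP mapping state -> control."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def policy(params, x):
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return u_max * jnp.tanh(h @ W + b)   # squashing enforces the control bound

def dynamics(x, u):
    # Known deterministic model (toy example): nonlinear drift plus input.
    return -x**3 + u

def closed_loop(params, x):
    # Feedback policy partially defines the vector field of the system.
    return dynamics(x, policy(params, x))

def rk4_step(params, x):
    k1 = closed_loop(params, x)
    k2 = closed_loop(params, x + 0.5 * dt * k1)
    k3 = closed_loop(params, x + 0.5 * dt * k2)
    k4 = closed_loop(params, x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def objective(params, x0):
    def step(x, _):
        u = policy(params, x)
        running = jnp.sum(x**2) + 0.1 * jnp.sum(u**2)            # quadratic cost
        penalty = 1e2 * jnp.sum(jnp.maximum(x_min - x, 0.0)**2)  # state constraint
        return rk4_step(params, x), (running + penalty) * dt
    xT, costs = jax.lax.scan(step, x0, None, length=N)
    return jnp.sum(costs) + jnp.sum(xT**2)   # running + terminal cost

# Policy gradient obtained by differentiating through the known dynamics,
# with no sampling-based gradient estimation.
grad_fn = jax.jit(jax.grad(objective))

params = init_params(jax.random.PRNGKey(0))
x0 = jnp.array([1.0])
lr = 1e-2
for _ in range(200):             # plain gradient descent, for brevity
    grads = grad_fn(params, x0)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```

Because the dynamics are a known white-box model, the gradient returned by `jax.grad` is exact for the discretized rollout, which is the contrast the abstract draws with sampling-based policy-gradient estimators in reinforcement learning.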
Related papers
- Receding Hamiltonian-Informed Optimal Neural Control and State Estimation for Closed-Loop Dynamical Systems [4.05766189327054]
Hamiltonian-Informed Optimal Neural (Hion) controllers are a novel class of neural network-based controllers for dynamical systems.
Hion controllers estimate future states and compute optimal control inputs using Pontryagin's Principle.
arXiv Detail & Related papers (2024-11-02T16:06:29Z)
- Real-time optimal control of high-dimensional parametrized systems by deep learning-based reduced order models [3.5161229331588095]
We propose a non-intrusive Deep Learning-based Reduced Order Modeling (DL-ROM) technique for the rapid control of systems described in terms of parametrized PDEs in multiple scenarios.
After (i) data generation, (ii) dimensionality reduction, and (iii) neural network training in the offline phase, optimal control strategies can be rapidly retrieved in an online phase for any scenario of interest.
arXiv Detail & Related papers (2024-09-09T15:20:24Z)
- Two-Stage ML-Guided Decision Rules for Sequential Decision Making under Uncertainty [55.06411438416805]
Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains.
Some SDMU problems are naturally modeled as Multistage Problems (MSPs), but the resulting optimizations are notoriously challenging from a computational standpoint.
This paper introduces a novel approach, Two-Stage General Decision Rules (TS-GDR), to generalize the policy space beyond linear functions.
The effectiveness of TS-GDR is demonstrated through an instantiation using Deep Recurrent Neural Networks named Two-Stage Deep Decision Rules (TS-LDR).
arXiv Detail & Related papers (2024-05-23T18:19:47Z)
- Neural Time-Reversed Generalized Riccati Equation [60.92253836775246]
Hamiltonian equations offer an interpretation of optimality through auxiliary variables known as costates.
This paper introduces a novel neural-based approach to optimal control, with the aim of working forward in time.
arXiv Detail & Related papers (2023-12-14T19:29:37Z)
- A Neural RDE approach for continuous-time non-Markovian stochastic control problems [4.155942878350882]
We propose a novel framework for continuous-time non-Markovian control problems by means of neural rough differential equations (Neural RDEs).
Non-Markovianity naturally arises in control problems due to the time delay effects in the system coefficients or the driving noises.
By modelling the control process as the solution of a Neural RDE driven by the state process, we show that the control-state joint dynamics are governed by an uncontrolled, augmented Neural RDE.
arXiv Detail & Related papers (2023-06-25T14:30:33Z)
- Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles [5.650647159993238]
In this work, we leverage deep learning tools, namely knowledge-based neural ordinary differential equations (KNODE) and deep ensembles, to improve the prediction accuracy of a model predictive control (MPC) framework.
In particular, we learn an ensemble of KNODE models, which we refer to as the KNODE ensemble, to obtain an accurate prediction of the true system dynamics.
We show that the KNODE ensemble provides more accurate predictions and illustrate the efficacy and closed-loop performance of the proposed nonlinear MPC framework.
arXiv Detail & Related papers (2022-11-24T23:51:18Z)
- Learning Stochastic Parametric Differentiable Predictive Control Policies [2.042924346801313]
We present a scalable alternative called stochastic parametric differentiable predictive control (SP-DPC) for unsupervised learning of neural control policies.
SP-DPC is formulated as a deterministic approximation to the parametric constrained optimal control problem.
We provide theoretical probabilistic guarantees for policies learned via the SP-DPC method on closed-loop constraints and chance constraint satisfaction.
arXiv Detail & Related papers (2022-03-02T22:46:32Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.