Near-optimal control of dynamical systems with neural ordinary
differential equations
- URL: http://arxiv.org/abs/2206.11120v1
- Date: Wed, 22 Jun 2022 14:11:11 GMT
- Title: Near-optimal control of dynamical systems with neural ordinary
differential equations
- Authors: Lucas Böttcher and Thomas Asikis
- Abstract summary: Recent advances in deep learning and neural network-based optimization have contributed to the development of methods that can help solve control problems involving high-dimensional dynamical systems.
We first analyze how truncated and non-truncated backpropagation through time affect runtime performance and the ability of neural networks to learn optimal control functions.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Optimal control problems naturally arise in many scientific applications
where one wishes to steer a dynamical system from a certain initial state
$\mathbf{x}_0$ to a desired target state $\mathbf{x}^*$ in finite time $T$.
Recent advances in deep learning and neural network-based optimization have
contributed to the development of methods that can help solve control problems
involving high-dimensional dynamical systems. In particular, the framework of
neural ordinary differential equations (neural ODEs) provides an efficient
means to iteratively approximate continuous time control functions associated
with analytically intractable and computationally demanding control tasks.
Although neural ODE controllers have shown great potential in solving complex
control problems, the understanding of the effects of hyperparameters such as
network structure and optimizers on learning performance is still very limited.
Our work aims at addressing some of these knowledge gaps to conduct efficient
hyperparameter optimization. To this end, we first analyze how truncated and
non-truncated backpropagation through time affect runtime performance and the
ability of neural networks to learn optimal control functions. Using analytical
and numerical methods, we then study the role of parameter initializations,
optimizers, and neural-network architecture. Finally, we connect our results to
the ability of neural ODE controllers to implicitly regularize control energy.
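The control setup the abstract describes (steer a system from $\mathbf{x}_0$ to a target $\mathbf{x}^*$ in time $T$ by training a neural controller through the ODE solver) can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the scalar system, the tiny tanh network, and the finite-difference gradients (a crude stand-in for backpropagation through time) are all our own simplifying assumptions.

```python
import math, random

# Hypothetical toy problem: steer dx/dt = u_theta(t, x) from x0 = 1
# to target x* = 0 in time T = 1, via discretize-then-optimize.
random.seed(0)
H = 8                      # hidden units in the controller network
T, steps = 1.0, 50
dt = T / steps
x0, x_target = 1.0, 0.0

# flat parameter vector: per unit, weights for t and x, a bias, and an output weight
theta = [random.uniform(-0.5, 0.5) for _ in range(4 * H)]

def control(theta, t, x):
    """One-hidden-layer tanh network u_theta(t, x)."""
    u = 0.0
    for i in range(H):
        w_t, w_x, b, v = theta[i], theta[H + i], theta[2 * H + i], theta[3 * H + i]
        u += v * math.tanh(w_t * t + w_x * x + b)
    return u

def terminal_loss(theta):
    """Euler-integrate the controlled ODE and return (x(T) - x*)^2."""
    x = x0
    for k in range(steps):
        x += dt * control(theta, k * dt, x)
    return (x - x_target) ** 2

init_loss = terminal_loss(theta)

# Gradient descent on the terminal loss. Finite differences replace
# backprop through time here only to keep the sketch dependency-free.
lr, eps = 0.05, 1e-5
for epoch in range(200):
    base = terminal_loss(theta)
    grad = []
    for j in range(len(theta)):
        theta[j] += eps
        grad.append((terminal_loss(theta) - base) / eps)
        theta[j] -= eps
    for j in range(len(theta)):
        theta[j] -= lr * grad[j]

final_loss = terminal_loss(theta)
print(f"loss: {init_loss:.4f} -> {final_loss:.6f}")
```

In the paper's actual setting the gradient is obtained by (truncated or non-truncated) backpropagation through the ODE solver rather than finite differences, and the loss may additionally trade off control energy, which is where the implicit-regularization result becomes relevant.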
Related papers
- Neural Control: Concurrent System Identification and Control Learning with Neural ODE [13.727727205587804]
We propose a neural ODE based method for controlling unknown dynamical systems, denoted as Neural Control (NC).
Our model concurrently learns the system dynamics as well as optimal controls that guide the system towards target states.
Our experiments demonstrate the effectiveness of our model for learning optimal control of unknown dynamical systems.
arXiv Detail & Related papers (2024-01-03T17:05:17Z)
- Model-Based Control with Sparse Neural Dynamics [23.961218902837807]
We propose a new framework for integrated model learning and predictive control.
We show that our framework can deliver better closed-loop performance than existing state-of-the-art methods.
arXiv Detail & Related papers (2023-12-20T06:25:02Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Implicit energy regularization of neural ordinary-differential-equation control [3.5880535198436156]
We present a versatile neural ordinary-differential-equation control (NODEC) framework with implicit energy regularization.
We show that NODEC can steer dynamical systems towards a desired target state within a predefined amount of time.
arXiv Detail & Related papers (2021-03-11T08:28:15Z)
- Neural Network Approximations of Compositional Functions With Applications to Dynamical Systems [3.660098145214465]
We develop an approximation theory for compositional functions and their neural network approximations.
We identify a set of key features of compositional functions and the relationship between the features and the complexity of neural networks.
In addition to function approximation, we prove several formulae for upper bounds on neural network approximation errors.
arXiv Detail & Related papers (2020-12-03T04:40:25Z)
- Learning to Control PDEs with Differentiable Physics [102.36050646250871]
We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.
We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs.
arXiv Detail & Related papers (2020-01-21T11:58:41Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.