Delay-adaptive Control of Nonlinear Systems with Approximate Neural Operator Predictors
- URL: http://arxiv.org/abs/2508.20367v1
- Date: Thu, 28 Aug 2025 02:30:53 GMT
- Title: Delay-adaptive Control of Nonlinear Systems with Approximate Neural Operator Predictors
- Authors: Luke Bhan, Miroslav Krstic, Yuanyuan Shi
- Abstract summary: We propose a rigorous method for implementing predictor feedback controllers in nonlinear systems with unknown and arbitrarily long actuator delays. To address the analytically intractable nature of the predictor, we approximate it using a learned neural operator mapping. We provide a theoretical stability analysis based on the universal approximation theorem of neural operators and the transport partial differential equation (PDE) representation of the delay. We then prove, via a Lyapunov-Krasovskii functional, semi-global practical convergence of the dynamical system dependent on the approximation error of the predictor and delay bounds.
- Score: 6.093618731228799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a rigorous method for implementing predictor feedback controllers in nonlinear systems with unknown and arbitrarily long actuator delays. To address the analytically intractable nature of the predictor, we approximate it using a learned neural operator mapping. This mapping is trained once, offline, and then deployed online, leveraging the fast inference capabilities of neural networks. We provide a theoretical stability analysis based on the universal approximation theorem of neural operators and the transport partial differential equation (PDE) representation of the delay. We then prove, via a Lyapunov-Krasovskii functional, semi-global practical convergence of the dynamical system dependent on the approximation error of the predictor and delay bounds. Finally, we validate our theoretical results using a biological activator/repressor system, demonstrating speedups of 15 times compared to traditional numerical methods.
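The predictor-feedback idea in the abstract can be made concrete with a small numerical sketch. The snippet below is not the authors' code: it implements the classical integration-based predictor that the paper's neural operator is trained once, offline, to approximate, for a hypothetical scalar system x' = f(x) + u(t - D) with known delay D and toy dynamics f(x) = x - x^3 (all names and constants are illustrative assumptions). Feedback is applied to the predicted state x(t + D), so the closed loop behaves as if the delay were absent.

```python
def f(x):
    # Toy open-loop dynamics (an assumption for illustration)
    return x - x**3

def predictor(x, u_buffer, dt):
    """D-step-ahead prediction: integrate the model forward through the
    buffered, not-yet-applied control inputs u(t-D), ..., u(t-dt).
    This numerical loop is what a neural operator would replace."""
    p = x
    for u in u_buffer:
        p = p + dt * (f(p) + u)   # forward Euler step
    return p

def simulate(D=1.0, dt=0.01, T=10.0, k=2.0, x0=2.0):
    n_delay = int(D / dt)
    u_buffer = [0.0] * n_delay    # actuator pipeline: inputs in transit
    x = x0
    for _ in range(int(T / dt)):
        p = predictor(x, u_buffer, dt)      # predicted state x(t + D)
        u = -k * p - f(p)                   # feedback on the predicted state
        x = x + dt * (f(x) + u_buffer[0])   # plant sees the D-delayed input
        u_buffer = u_buffer[1:] + [u]       # shift the delay line
    return x
```

With an exact model, the prediction cancels the delay and the predicted closed loop reduces to p' = -k p, so the state settles near the origin; the neural operator variant trades this per-step integration for a single fast forward pass.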
Related papers
- Upper Approximation Bounds for Neural Oscillators [8.075776288865907]
The theory of quantifying the capacities of neural network architectures remains a significant challenge. This study considers the neural oscillator consisting of a second-order ODE followed by a multilayer perceptron. The results provide a robust theoretical foundation for the effective application of the neural oscillator in science and engineering.
arXiv Detail & Related papers (2025-11-30T18:20:40Z) - Stabilization of nonlinear systems with unknown delays via delay-adaptive neural operator approximate predictors [6.093618731228799]
This work establishes the first rigorous stability guarantees for approximate predictors in delay-adaptive control of nonlinear systems. We show that neural operators, a flexible class of neural network-based approximators, can achieve arbitrarily small approximation errors.
arXiv Detail & Related papers (2025-09-30T16:00:58Z) - Delay compensation of multi-input distinct delay nonlinear systems via neural operators [5.578049844940438]
We show that if the predictor approximation satisfies a uniform (in time) error bound, semi-global practical stability is correspondingly achieved. For such approximators, the required uniform error bound depends on the desired region of attraction and the number of control inputs in the system.
arXiv Detail & Related papers (2025-09-21T15:46:46Z) - Certified Neural Approximations of Nonlinear Dynamics [52.79163248326912]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Uncertainty propagation in feed-forward neural network models [3.987067170467799]
We develop new uncertainty propagation methods for feed-forward neural network architectures. We derive analytical expressions for the probability density function (PDF) of the neural network output. A key finding is that an appropriate linearization of the leaky ReLU activation function yields accurate statistical results.
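The linearization idea in the summary above can be illustrated with a generic first-order (delta-method) sketch, not the paper's actual derivation: a Gaussian input's mean and covariance pass through the affine part of a layer exactly, while the leaky ReLU is linearized at the pre-activation mean.

```python
import numpy as np

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)

def propagate_moments(mu, cov, W, b, a=0.01):
    """First-order moment propagation through one layer.
    The affine part is exact for Gaussian inputs; the leaky ReLU is
    linearized at the pre-activation mean (an assumption, accurate when
    the mean is far from zero relative to the standard deviation)."""
    mu_z = W @ mu + b                     # pre-activation mean (exact)
    cov_z = W @ cov @ W.T                 # pre-activation covariance (exact)
    slope = np.where(mu_z > 0, 1.0, a)    # local slope of the leaky ReLU
    mu_out = leaky_relu(mu_z, a)
    cov_out = slope[:, None] * cov_z * slope[None, :]
    return mu_out, cov_out
```

A Monte Carlo check (sampling inputs and pushing them through the layer) matches these moments closely whenever the pre-activation means are well separated from zero.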
arXiv Detail & Related papers (2025-03-27T00:16:36Z) - Neural Operators for Predictor Feedback Control of Nonlinear Delay Systems [3.0248879829045388]
We recast the predictor design as an operator learning problem, and learn the predictor mapping via a neural operator. Under the approximated predictor, we achieve semi-global practical stability of the closed-loop nonlinear delay system. We demonstrate the approach by controlling a 5-link robotic manipulator with different neural operator models.
arXiv Detail & Related papers (2024-11-28T07:30:26Z) - A Theoretical Overview of Neural Contraction Metrics for Learning-based
Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
arXiv Detail & Related papers (2021-10-02T00:28:49Z) - Incorporating NODE with Pre-trained Neural Differential Operator for
Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, and it learns the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z) - Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear
Dynamics [49.41640137945938]
We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
arXiv Detail & Related papers (2020-12-11T08:34:26Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
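The min-max training described above can be illustrated on a toy saddle problem (a generic sketch, not the paper's estimator): simultaneous gradient descent on the minimizing player and gradient ascent on the maximizing player, here with scalar players in place of the paper's neural networks.

```python
# Hedged illustration of min-max training via simultaneous gradient
# descent (on x) / ascent (on y) for the toy objective
#   f(x, y) = x^2 + x*y - y^2,
# whose unique saddle point is (0, 0). Names and constants are
# illustrative assumptions, not from the paper.

def gda(x=1.0, y=-1.0, lr=0.05, steps=2000):
    for _ in range(steps):
        gx = 2 * x + y   # df/dx, used by the minimizer
        gy = x - 2 * y   # df/dy, used by the maximizer
        # Simultaneous update: both gradients use the pre-update point
        x, y = x - lr * gx, y + lr * gy
    return x, y
```

For this strongly convex-concave objective, the iterates spiral into the saddle point; with neural network players the same scheme is run on the network parameters.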
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.