Stabilization of nonlinear systems with unknown delays via delay-adaptive neural operator approximate predictors
- URL: http://arxiv.org/abs/2509.26443v1
- Date: Tue, 30 Sep 2025 16:00:58 GMT
- Title: Stabilization of nonlinear systems with unknown delays via delay-adaptive neural operator approximate predictors
- Authors: Luke Bhan, Miroslav Krstic, Yuanyuan Shi
- Abstract summary: This work establishes the first rigorous stability guarantees for approximate predictors in delay-adaptive control of nonlinear systems. We show that neural operators, a flexible class of neural network-based approximators, can achieve arbitrarily small approximation errors.
- Score: 6.093618731228799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work establishes the first rigorous stability guarantees for approximate predictors in delay-adaptive control of nonlinear systems, addressing a key challenge in practical implementations where exact predictors are unavailable. We analyze two scenarios: (i) when the actuated input is directly measurable, and (ii) when it is estimated online. For the measurable input case, we prove semi-global practical asymptotic stability with an explicit bound proportional to the approximation error $\epsilon$. For the unmeasured input case, we demonstrate local practical asymptotic stability, with the region of attraction explicitly dependent on both the initial delay estimate and the predictor approximation error. To bridge theory and practice, we show that neural operators (a flexible class of neural network-based approximators) can achieve arbitrarily small approximation errors, thus satisfying the conditions of our stability theorems. Numerical experiments on two nonlinear benchmark systems (a biological protein activator/repressor model and a micro-organism growth chemostat model) validate our theoretical results. In particular, our numerical simulations confirm stability under approximate predictors, highlight the strong generalization capabilities of neural operators, and demonstrate a substantial computational speedup of up to 15x compared to a baseline fixed-point method.
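The baseline to which the 15x speedup refers is the classical fixed-point (successive approximations) computation of the predictor. As a rough sketch of that baseline, not the authors' implementation, the following NumPy routine approximates the predictor for a plant $\dot{x}(t) = f(x(t), u(t-D))$ under a delay estimate; the function name, grid, and iteration count are illustrative assumptions.

```python
import numpy as np

def fixed_point_predictor(x_t, u_hist, f, D_hat, n_iter=50, tol=1e-8):
    """Successive approximations for the predictor profile
    P(theta) = x(t) + int_{t-D_hat}^{theta} f(P(s), u(s)) ds
    on a uniform grid over the delay window [t - D_hat, t].
    u_hist[i] holds u(s_i) on that grid. Illustrative sketch only."""
    m = len(u_hist)
    ds = D_hat / (m - 1)
    P = np.tile(x_t, (m, 1))            # initial guess: frozen state profile
    for _ in range(n_iter):
        rhs = np.stack([f(P[i], u_hist[i]) for i in range(m)])
        # cumulative trapezoid integral of f(P, u) across the window
        integral = np.vstack([np.zeros_like(x_t),
                              np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * ds, axis=0)])
        P_new = x_t + integral
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    return P[-1]                        # predictor state, approx. x(t + D_hat)

# Example: scalar unstable plant xdot = x + u(t - D), nominal law kappa(P) = -2P
f = lambda x, u: x + u
x_t = np.array([1.0])
u_window = np.zeros(101)                # recorded inputs over the delay window
P_t = fixed_point_predictor(x_t, u_window, f, D_hat=1.0)
U_t = -2.0 * P_t                        # predictor feedback U(t) = kappa(P(t))
```

A trained neural operator replaces this inner iteration with a single forward pass, which is consistent with the speedup reported above; the paper's stability theorems then only require the resulting approximation error $\epsilon$ to be sufficiently small.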
Related papers
- Optimizing Parallel Schemes with Lyapunov Exponents and kNN-LLE Estimation [0.0]
We present a unified analytical-data-driven methodology for identifying, measuring, and reducing such instabilities in inverse parallel solvers. On the theoretical side, we derive stability and bifurcation characterizations of the underlying iterative maps. On the computational side, we introduce a micro-series pipeline based on kNN-driven estimation of the local largest Lyapunov exponent.
arXiv Detail & Related papers (2026-01-20T05:09:52Z) - Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations [57.179679246370114]
We identify the distribution of random perturbations that minimizes the estimator's variance as the perturbation stepsize tends to zero. Our findings reveal that such desired perturbations can align directionally with the true gradient, instead of maintaining a fixed length.
arXiv Detail & Related papers (2025-10-22T19:06:39Z) - Delay compensation of multi-input distinct delay nonlinear systems via neural operators [5.578049844940438]
We show that if the predictor approximation satisfies a uniform (in time) error bound, semi-global practical stability is correspondingly achieved. For such approximators, the required uniform error bound depends on the desired region of attraction and the number of control inputs in the system.
arXiv Detail & Related papers (2025-09-21T15:46:46Z) - Delay-adaptive Control of Nonlinear Systems with Approximate Neural Operator Predictors [6.093618731228799]
We propose a rigorous method for implementing predictor feedback controllers in nonlinear systems with unknown and arbitrarily long actuator delays. To address the analytically intractable nature of the predictor, we approximate it using a learned neural operator mapping. We provide a theoretical stability analysis based on the universal approximation theorem of neural operators and the transport partial differential equation (PDE) representation of the delay (see the transport-PDE sketch at the end of this list). We then prove, via a Lyapunov-Krasovskii functional, semi-global practical convergence of the dynamical system, dependent on the approximation error of the predictor and the delay bounds.
arXiv Detail & Related papers (2025-08-28T02:30:53Z) - Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system.<n>We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Neural Operators for Predictor Feedback Control of Nonlinear Delay Systems [3.0248879829045388]
We recast the predictor design as an operator learning problem, and learn the predictor mapping via a neural operator. Under the approximated predictor, we achieve semiglobal practical stability of the closed-loop nonlinear delay system. We demonstrate the approach by controlling a 5-link robotic manipulator with different neural operator models (see the toy operator-learning sketch at the end of this list).
arXiv Detail & Related papers (2024-11-28T07:30:26Z) - Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
We investigate the statistical properties of Temporal Difference learning with Polyak-Ruppert averaging. We make three significant contributions that improve the current state-of-the-art results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that multiplicative noise, as it commonly arises due to variance in minibatching, results in heavy-tailed stationary behaviour in the parameters.
A detailed analysis is conducted of key factors, including step size, batch size, and data, all of which exhibit similar effects in experiments on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z) - Stable Neural Flows [15.318500611972441]
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
The learning procedure is cast as an optimal control problem, and an approximate solution is proposed based on adjoint sensitivity analysis.
arXiv Detail & Related papers (2020-03-18T06:27:21Z)
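For reference, the "transport PDE representation of the delay" invoked in the Delay-adaptive Control entry above is the standard rewriting of an input delay as a boundary-controlled transport equation; the notation below is generic, not taken from that paper.

```latex
% An actuator delay D recasts \dot{x}(t) = f(x(t), U(t - D)) as the ODE-PDE cascade
\begin{aligned}
\dot{x}(t)             &= f\bigl(x(t),\, v(0,t)\bigr), \\
D\,\partial_t v(\xi,t) &= \partial_\xi v(\xi,t), \qquad \xi \in (0,1), \\
v(1,t)                 &= U(t),
\end{aligned}
% whose explicit solution v(\xi,t) = U(t + D(\xi - 1)) recovers v(0,t) = U(t - D).
```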
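Likewise, for the operator-learning recasting mentioned in the Neural Operators for Predictor Feedback entry, here is a toy end-to-end illustration: training targets come from the fixed-point routine sketched under the main abstract, and the random-feature regressor is a deliberately crude stand-in for a neural operator. All names are hypothetical.

```python
import numpy as np

def make_predictor_dataset(f, D_hat, n_samples, m, state_dim, rng):
    """Label random (state, input-window) pairs with the fixed-point
    predictor; fixed_point_predictor is the sketch given earlier."""
    X, Y = [], []
    for _ in range(n_samples):
        x_t = rng.normal(size=state_dim)
        u_hist = rng.normal(size=m)
        P_t = fixed_point_predictor(x_t, u_hist, f, D_hat)
        X.append(np.concatenate([x_t, u_hist]))
        Y.append(P_t)
    return np.stack(X), np.stack(Y)

def fit_random_feature_operator(X, Y, width=512, reg=1e-6, seed=0):
    """Random tanh features plus ridge regression: a crude surrogate for
    the neural operator mapping (x(t), input window) -> P(t)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(X.shape[1]), size=(X.shape[1], width))
    b = rng.normal(size=width)
    Phi = np.tanh(X @ W + b)
    C = np.linalg.solve(Phi.T @ Phi + reg * np.eye(width), Phi.T @ Y)
    return lambda x: np.tanh(x @ W + b) @ C   # one forward pass per control step

# Hypothetical usage with the scalar example plant from the earlier sketch
rng = np.random.default_rng(1)
X, Y = make_predictor_dataset(lambda x, u: x + u, 1.0, 2000, 101, 1, rng)
predict_P = fit_random_feature_operator(X, Y)
```

At run time, the learned map replaces the entire fixed-point loop with one forward pass, loosely mirroring how the cited papers obtain fast approximate predictors whose error then enters the stability bounds.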