A Framework for Adaptive Stabilisation of Nonlinear Stochastic Systems
- URL: http://arxiv.org/abs/2511.17436v1
- Date: Fri, 21 Nov 2025 17:33:00 GMT
- Title: A Framework for Adaptive Stabilisation of Nonlinear Stochastic Systems
- Authors: Seth Siriya, Jingge Zhu, Dragan Nešić, Ye Pu
- Abstract summary: We consider the adaptive control problem for discrete-time, nonlinear systems with linearly parameterised uncertainty. We propose a certainty equivalence learning-based adaptive control strategy, and derive stability bounds on the closed-loop system that hold with a given probability. We show that if the entire state space is informative, and the family of controllers is globally stabilising with appropriately chosen parameters, high probability stability guarantees can be derived.
- Score: 10.266286487433584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the adaptive control problem for discrete-time, nonlinear stochastic systems with linearly parameterised uncertainty. Assuming access to a parameterised family of controllers that can stabilise the system in a bounded set within an informative region of the state space when the parameter is well-chosen, we propose a certainty equivalence learning-based adaptive control strategy, and subsequently derive stability bounds on the closed-loop system that hold with a given probability. We then show that if the entire state space is informative, and the family of controllers is globally stabilising with appropriately chosen parameters, high probability stability guarantees can be derived.
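The certainty equivalence strategy described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual setup: a scalar system x_{t+1} = theta* phi(x_t) + u_t + w_t with uncertainty linear in the unknown theta*, a least squares estimate of theta* from the single closed-loop trajectory, and a hypothetical controller u = -theta_hat phi(x) that cancels the estimated nonlinearity.

```python
import numpy as np

# Minimal certainty-equivalence adaptive control sketch (illustrative only).
# Assumed dynamics: x_{t+1} = theta* * phi(x_t) + u_t + w_t, with the
# uncertainty linear in the unknown parameter theta*.
rng = np.random.default_rng(0)
theta_true = 0.8              # unknown parameter (hypothetical value)
phi = lambda x: np.tanh(x)    # known feature of the linearly parameterised uncertainty

x, theta_hat = 1.0, 0.0
feats, targets = [], []
for t in range(200):
    # Certainty equivalence: act as if theta_hat were the true parameter,
    # cancelling the estimated nonlinearity (a hypothetical controller choice).
    u = -theta_hat * phi(x)
    w = 0.01 * rng.standard_normal()
    x_next = theta_true * phi(x) + u + w
    # Least squares update from the single trajectory:
    # x_{t+1} - u_t = theta* * phi(x_t) + w_t is linear in theta*.
    feats.append(phi(x))
    targets.append(x_next - u)
    F, y = np.array(feats), np.array(targets)
    theta_hat = float(F @ y / (F @ F + 1e-8))
    x = x_next

print(abs(theta_hat - theta_true))  # estimation error shrinks as data accumulates
```

Because the controller keeps the state in a region where phi(x) is informative, the least squares estimate converges and the closed loop behaves like the known-parameter system, which is the intuition behind the paper's high-probability guarantees.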
Related papers
- Non-Asymptotic Bounds for Closed-Loop Identification of Unstable Nonlinear Stochastic Systems [5.102311052155507]
We consider the problem of least squares parameter estimation from single-trajectory data. We establish non-asymptotic guarantees on the estimation error at times where the state trajectory evolves in this region. If the whole state space is informative, high probability guarantees on the error hold for all times.
arXiv Detail & Related papers (2024-12-05T13:45:35Z)
- Stability Bounds for Learning-Based Adaptive Control of Discrete-Time Multi-Dimensional Stochastic Linear Systems with Input Constraints [3.8004168340068336]
We consider the problem of adaptive stabilization for discrete-time, multi-dimensional systems with bounded control input constraints and unbounded disturbances.
We propose a certainty-equivalent control scheme which combines online parameter estimation with saturated linear control.
arXiv Detail & Related papers (2023-04-02T16:38:13Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Robust Stability of Neural-Network Controlled Nonlinear Systems with Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
For computing such a robust stabilizing NN controller, a stability guaranteed training (SGT) is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Improper Learning with Gradient-based Policy Optimization [62.50997487685586]
We consider an improper reinforcement learning setting where the learner is given M base controllers for an unknown Markov Decision Process.
We propose a gradient-based approach that operates over a class of improper mixtures of the controllers.
arXiv Detail & Related papers (2021-02-16T14:53:55Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under quadratic costs, a model also known as the linear quadratic regulator (LQR).
We present two different semi-definite programs (SDPs) which result in a controller that stabilizes all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm -- eXploration -- that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
- Optimistic robust linear quadratic dual control [4.94950858749529]
We present a dual control strategy that attempts to combine the performance of certainty equivalence, with the practical utility of robustness.
The formulation preserves structure in the representation of parametric uncertainty, which allows the controller to target reduction of uncertainty in the parameters that 'matter most' for the control task.
arXiv Detail & Related papers (2019-12-31T02:02:11Z)
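Several of the listed papers (closed-loop identification, certainty-equivalent adaptive control, LqgOpt) rest on online least squares parameter estimation from a single trajectory. A minimal recursive least squares sketch, with entirely hypothetical scalar data, illustrates the per-step update such methods run:

```python
import numpy as np

# Recursive least squares (RLS) for y_t = theta . phi_t + w_t, the online
# estimator underlying several of the adaptive-control papers above.
# All numbers here are hypothetical, for illustration only.
rng = np.random.default_rng(1)
theta_true = np.array([0.5, -1.2])   # unknown parameter vector
theta_hat = np.zeros(2)
P = 100.0 * np.eye(2)                # inverse of the regularised Gram matrix

for t in range(500):
    phi_t = rng.standard_normal(2)               # regressor (informative data)
    y_t = theta_true @ phi_t + 0.05 * rng.standard_normal()
    # Standard RLS step: gain vector, prediction-error correction,
    # rank-one downdate of P.
    K = P @ phi_t / (1.0 + phi_t @ P @ phi_t)
    theta_hat = theta_hat + K * (y_t - phi_t @ theta_hat)
    P = P - np.outer(K, phi_t @ P)

print(np.linalg.norm(theta_hat - theta_true))
```

The "informative region" assumptions in the papers above correspond to the regressors phi_t being persistently exciting, which keeps P shrinking and the estimation error decaying.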
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.