Sliding Mode Learning Control of Uncertain Nonlinear Systems with
Lyapunov Stability Analysis
- URL: http://arxiv.org/abs/2103.11274v1
- Date: Sun, 21 Mar 2021 01:03:04 GMT
- Title: Sliding Mode Learning Control of Uncertain Nonlinear Systems with
Lyapunov Stability Analysis
- Authors: Erkan Kayacan
- Abstract summary: The stability of the sliding mode learning algorithm has been proven in the literature.
The stability of the overall system is proven for nth-order uncertain nonlinear systems.
The developed SMLC algorithm can learn the system behavior in the absence of any mathematical model knowledge.
- Score: 3.2996723916635267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses Sliding Mode Learning Control (SMLC) of uncertain
nonlinear systems with Lyapunov stability analysis. In the control scheme, a
conventional control term provides system stability in a compact space while a
Type-2 Neuro-Fuzzy Controller (T2NFC) learns the system behavior, so that the
T2NFC takes over full control of the system within a very short time. The
stability of the sliding mode learning algorithm has been proven in the
literature; however, the existing result is restrictive because it does not
guarantee the stability of the overall system. To address this shortcoming,
this paper proposes a novel control structure with a novel sliding surface and
proves the stability of the overall system for nth-order uncertain nonlinear
systems. To investigate the capability and effectiveness of the proposed
learning and control algorithms, simulation studies have been carried out
under noisy conditions. The simulation results confirm that the developed SMLC
algorithm can learn the system behavior without any mathematical model
knowledge and exhibits robust control performance against external
disturbances.
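As a rough illustration of the scheme described in the abstract, the sketch below simulates a second-order uncertain system under a simplified sliding mode controller: a conventional switching term enforces reaching, while an adaptive term gradually learns to cancel the unknown dynamics and take over the control effort. This is a minimal stand-in, not the paper's T2NFC or its sliding surface; the plant `f`, the gains `lam`, `k`, `gamma`, and the boundary-layer width `phi` are all illustrative assumptions.

```python
import math

def simulate(x0=1.0, v0=0.0, dt=1e-3, steps=20_000,
             lam=2.0, k=2.0, gamma=1.0, phi=0.05):
    """Simplified sliding mode learning control of x'' = f(x, v) + u.

    Regulation to x = 0 with sliding variable s = v + lam * x.
    A switching term -k * tanh(s / phi) (smoothed to avoid chattering)
    drives s toward zero, while an adaptive term u_learn integrates
    -gamma * s to learn the unknown dynamics over time.
    """
    def f(x, v):
        # Unknown, bounded plant dynamics; hidden from the controller.
        return math.sin(x) + 0.2 - 0.1 * v

    x, v, u_learn = x0, v0, 0.0
    for _ in range(steps):
        s = v + lam * x                       # sliding variable
        u = u_learn - k * math.tanh(s / phi)  # learned + switching term
        u_learn += -gamma * s * dt            # simple adaptation law
        a = f(x, v) + u                       # plant acceleration
        x, v = x + v * dt, v + a * dt         # explicit Euler step
    return x, v

if __name__ == "__main__":
    xf, vf = simulate()
    print(f"final state: x = {xf:.4f}, v = {vf:.4f}")
```

With these gains the state settles close to the origin even though the controller never sees `f`; the residual offset reflects the smoothed switching term and the slow adaptation, not a failure of the sliding-mode idea.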
Related papers
- Learning to Boost the Performance of Stable Nonlinear Systems [0.0]
We tackle the performance-boosting problem with closed-loop stability guarantees.
Our methods enable learning over arbitrarily deep neural network classes of performance-boosting controllers for stable nonlinear systems.
arXiv Detail & Related papers (2024-05-01T21:11:29Z)
- Learning Over Contracting and Lipschitz Closed-Loops for
Partially-Observed Nonlinear Systems (Extended Version) [1.2430809884830318]
This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems.
We prove that the resulting Youla-REN parameterization automatically satisfies stability (contraction) and user-tunable robustness (Lipschitz) conditions.
We find that the Youla-REN performs similarly to existing learning-based and optimal control methods while also ensuring stability and exhibiting improved robustness to adversarial disturbances.
arXiv Detail & Related papers (2023-04-12T23:55:56Z)
- Improving the Performance of Robust Control through Event-Triggered
Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Neural Koopman Lyapunov Control [0.0]
We propose a framework to identify and construct stabilizable bilinear control systems and their associated observables from data.
Our proposed approach provides provable guarantees of global stability for the nonlinear control systems with unknown dynamics.
arXiv Detail & Related papers (2022-01-13T17:38:09Z)
- Bayesian Algorithms Learn to Stabilize Unknown Continuous-Time Systems [0.0]
Linear dynamical systems are canonical models for learning-based control of plants with uncertain dynamics.
However, a reliable stabilization procedure that can effectively learn from unstable data to stabilize the system in finite time is not currently available.
In this work, we propose a novel learning algorithm that stabilizes unknown continuous-time linear systems.
arXiv Detail & Related papers (2021-12-30T15:31:35Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Stable Online Control of Linear Time-Varying Systems [49.41696101740271]
COCO-LQ is an efficient online control algorithm that guarantees input-to-state stability for a large class of LTV systems.
We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
arXiv Detail & Related papers (2021-04-29T06:18:49Z)
- The Impact of Data on the Stability of Learning-Based Control - Extended
Version [63.97366815968177]
We propose a Lyapunov-based measure for quantifying the impact of data on the certifiable control performance.
By modeling unknown system dynamics through Gaussian processes, we can determine the interrelation between model uncertainty and satisfaction of stability conditions.
arXiv Detail & Related papers (2020-11-20T19:10:01Z)
- Robust Model-Free Learning and Control without Prior Knowledge [1.14219428942199]
We present a model-free control algorithm that robustly learns and stabilizes an unknown discrete-time linear system.
The controller does not require any prior knowledge of the system dynamics, disturbances, or noise.
We conclude with simulation results showing that, despite its generality and simplicity, the controller achieves good closed-loop performance.
arXiv Detail & Related papers (2020-10-01T05:43:33Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic
Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under quadratic cost models, also known as linear quadratic regulators (LQR).
We present two different semi-definite programs (SDPs), which result in a controller that stabilizes all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm -- eXploration -- that quickly identifies a stabilizing controller with high probability.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.