On the Local Quadratic Stability of T-S Fuzzy Systems in the Vicinity of
the Origin
- URL: http://arxiv.org/abs/2309.06841v2
- Date: Thu, 14 Sep 2023 02:22:54 GMT
- Title: On the Local Quadratic Stability of T-S Fuzzy Systems in the Vicinity of
the Origin
- Authors: Donghwan Lee and Do Wan Kim
- Abstract summary: The main goal of this paper is to introduce new local stability conditions for continuous-time Takagi-Sugeno (T-S) fuzzy systems.
These stability conditions are based on linear matrix inequalities (LMIs) in combination with quadratic Lyapunov functions.
- Score: 7.191780076353627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main goal of this paper is to introduce new local stability conditions
for continuous-time Takagi-Sugeno (T-S) fuzzy systems. These stability
conditions are based on linear matrix inequalities (LMIs) in combination with
quadratic Lyapunov functions. Moreover, they integrate information on the
membership functions at the origin and effectively leverage the linear
structure of the underlying nonlinear system in the vicinity of the origin. As
a result, the proposed conditions are proved to be less conservative compared
to existing methods using fuzzy Lyapunov functions in the literature. Moreover,
we establish that the proposed methods offer necessary and sufficient
conditions for the local exponential stability of T-S fuzzy systems. The paper
also includes discussions on the inherent limitations associated with fuzzy
Lyapunov approaches. To demonstrate the theoretical results, we provide
comprehensive examples that elucidate the core concepts and validate the
efficacy of the proposed conditions.
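As a rough illustration of the kind of quadratic Lyapunov analysis the abstract refers to (a naive numerical heuristic, not the paper's actual LMI conditions), the sketch below solves a Lyapunov equation for one vertex system of a hypothetical two-rule T-S model and then tests whether the resulting matrix P also certifies the other vertex. The matrices A1 and A2 are made-up examples:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical vertex matrices of a two-rule T-S fuzzy model (made-up numbers)
A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-1.5, 0.5], [0.2, -2.0]])

# Candidate quadratic Lyapunov matrix: solve A1^T P + P A1 = -I for P
P = solve_continuous_lyapunov(A1.T, -np.eye(2))

def is_common_lyapunov(P, vertices, tol=1e-9):
    """Check P > 0 and A^T P + P A < 0 for every vertex A.

    This is only a sufficient spot-check: P is fitted to A1 and merely
    tested on the remaining vertices, unlike a joint LMI search."""
    if np.min(np.linalg.eigvalsh(P)) <= tol:
        return False
    return all(
        np.max(np.linalg.eigvalsh(A.T @ P + P @ A)) < -tol for A in vertices
    )

print(is_common_lyapunov(P, [A1, A2]))  # True for these matrices
```

A genuine LMI formulation would instead search for P jointly over all vertices with a semidefinite-programming solver; the point here is only the shape of the condition `A_i^T P + P A_i < 0` shared across rules.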
Related papers
- Local Stability and Region of Attraction Analysis for Neural Network Feedback Systems under Positivity Constraints [0.0]
We study the local stability of nonlinear systems in the Lur'e form with static nonlinear feedback realized by feedforward neural networks (FFNNs). By leveraging positivity constraints, we employ a localized variant of the Aizerman conjecture, which provides sufficient conditions for exponential stability of trajectories confined to a compact set.
arXiv Detail & Related papers (2025-05-28T21:45:49Z) - Finite-time stabilization of ladder multi-level quantum systems [3.188406620942066]
A novel continuous non-smooth control strategy is proposed to achieve finite-time stabilization of ladder quantum systems. We first design a universal fractional-order control law for a ladder n-level quantum system using a distance-based Lyapunov function. We derive an upper bound on the time required for convergence to an eigenstate of the intrinsic Hamiltonian.
arXiv Detail & Related papers (2025-05-18T08:33:42Z) - Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
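For context on what TD learning with linear function approximation and Polyak-Ruppert averaging looks like, here is a minimal toy sketch. The 2-state Markov reward process, step-size schedule, and feature map are made-up for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 2-state Markov reward process (purely illustrative)
P_trans = np.array([[0.9, 0.1], [0.2, 0.8]])  # transition probabilities
r = np.array([1.0, 0.0])                      # expected reward per state
phi = np.eye(2)                               # tabular (one-hot) features
gamma = 0.9

theta = np.zeros(2)      # TD iterate
theta_bar = np.zeros(2)  # Polyak-Ruppert average of the iterates
s = 0
for t in range(1, 50001):
    s_next = rng.choice(2, p=P_trans[s])
    # TD(0) update with linear features and a decaying step size
    delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta = theta + (0.5 / t**0.7) * delta * phi[s]
    # Running average of all iterates (Polyak-Ruppert averaging)
    theta_bar = theta_bar + (theta - theta_bar) / t
    s = s_next

# Closed-form value function for comparison: v = (I - gamma*P)^(-1) r
v_true = np.linalg.solve(np.eye(2) - gamma * P_trans, r)
print(theta_bar, v_true)
```

The averaged iterate `theta_bar` is what statistical-inference results of this kind typically analyze, since averaging smooths out the noise of the individual TD updates.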
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Distributionally Robust Policy and Lyapunov-Certificate Learning [13.38077406934971]
A key challenge in designing controllers with stability guarantees for uncertain systems is the accurate determination of, and adaptation to, shifts in model parametric uncertainty during online deployment.
We tackle this with a novel distributionally robust formulation of the Lyapunov derivative chance constraint ensuring a monotonic decrease of the Lyapunov certificate.
We show that, for the resulting closed-loop system, the global stability of its equilibrium can be certified with high confidence, even with Out-of-Distribution uncertainties.
arXiv Detail & Related papers (2024-04-03T18:57:54Z) - Stability-Certified Learning of Control Systems with Quadratic
Nonlinearities [9.599029891108229]
This work primarily focuses on an operator inference methodology aimed at constructing low-dimensional dynamical models.
Our main objective is to develop a method that facilitates the inference of quadratic control dynamical systems with inherent stability guarantees.
arXiv Detail & Related papers (2024-03-01T16:26:47Z) - Stochastic Subgradient Methods with Guaranteed Global Stability in Nonsmooth Nonconvex Optimization [3.0586855806896045]
We first investigate a general framework for subgradient methods, where the corresponding differential inclusion admits a coercive Lyapunov function.
We develop an improved analysis and apply the proposed framework to establish the global stability of a wide range of subgradient methods, where the corresponding Lyapunov functions are possibly non-coercive.
arXiv Detail & Related papers (2023-07-19T15:26:18Z) - KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed
Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z) - Robust Stability of Neural-Network Controlled Nonlinear Systems with
Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
For computing such a robust stabilizing NN controller, a stability-guaranteed training (SGT) algorithm is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - On the Stability of Nonlinear Receding Horizon Control: A Geometric
Perspective [72.7951562665449]
The widespread adoption of nonlinear Receding Horizon Control (RHC) strategies by industry has led to more than 30 years of intense research.
This paper takes the first step towards understanding the role of global geometry in the stability of optimization-based control.
arXiv Detail & Related papers (2021-03-27T22:59:37Z) - Stability-Certified Reinforcement Learning via Spectral Normalization [1.2179548969182574]
Two types of methods from different perspectives are described for ensuring the stability of a system controlled by a neural network.
The spectral normalization proposed in this article improves the feasibility of the a-posteriori stability test by constructing tighter local sectors.
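In its simplest form, spectral normalization rescales a weight matrix so that its largest singular value stays below a chosen bound, which constrains the Lipschitz constant of the corresponding layer. The following minimal sketch shows that basic operation (it is not the article's exact procedure, which builds tighter local sectors on top of it):

```python
import numpy as np

def spectral_normalize(W, bound=1.0):
    """Rescale W so its largest singular value is at most `bound`."""
    sigma = np.linalg.norm(W, 2)  # spectral norm = largest singular value
    return W if sigma <= bound else W * (bound / sigma)

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))       # made-up layer weights
Wn = spectral_normalize(W, bound=0.95)
print(np.linalg.norm(Wn, 2))      # at most 0.95 by construction
```

Bounding each layer's spectral norm below 1 bounds the gain of the whole network, which is what makes a-posteriori small-gain or sector-based stability tests easier to satisfy.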
arXiv Detail & Related papers (2020-12-26T14:26:24Z) - A Dynamical Systems Approach for Convergence of the Bayesian EM
Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z) - Fine-Grained Analysis of Stability and Generalization for Stochastic
Gradient Descent [55.85456985750134]
We introduce a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates.
This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting.
To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD with even non-differentiable loss functions.
arXiv Detail & Related papers (2020-06-15T06:30:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.