Local Stability and Region of Attraction Analysis for Neural Network Feedback Systems under Positivity Constraints
- URL: http://arxiv.org/abs/2505.22889v1
- Date: Wed, 28 May 2025 21:45:49 GMT
- Title: Local Stability and Region of Attraction Analysis for Neural Network Feedback Systems under Positivity Constraints
- Authors: Hamidreza Montazeri Hedesh, Moh Kamalul Wafi, Milad Siami
- Abstract summary: We study the local stability of nonlinear systems in the Lur'e form with static nonlinear feedback realized by feedforward neural networks (FFNNs). By leveraging positivity system constraints, we employ a localized variant of the Aizerman conjecture, which provides sufficient conditions for exponential stability of trajectories confined to a compact set.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the local stability of nonlinear systems in the Lur'e form with static nonlinear feedback realized by feedforward neural networks (FFNNs). By leveraging positivity system constraints, we employ a localized variant of the Aizerman conjecture, which provides sufficient conditions for exponential stability of trajectories confined to a compact set. Using this foundation, we develop two distinct methods for estimating the Region of Attraction (ROA): (i) a less conservative Lyapunov-based approach that constructs invariant sublevel sets of a quadratic function satisfying a linear matrix inequality (LMI), and (ii) a novel technique for computing tight local sector bounds for FFNNs via layer-wise propagation of linear relaxations. These bounds are integrated into the localized Aizerman framework to certify local exponential stability. Numerical results demonstrate substantial improvements over existing integral quadratic constraint-based approaches in both ROA size and scalability.
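To make the Lyapunov/LMI step of method (i) concrete, the sketch below estimates an ROA for a toy Lur'e system with a sector-bounded feedback nonlinearity. It uses a standard circle-criterion-style S-procedure LMI rather than the paper's exact localized-Aizerman formulation, and the system matrices, the sector [alpha, beta], the validity box r, and the solver choice are illustrative assumptions, not values from the paper.

```python
# Hedged sketch, not the paper's exact formulation: for x' = A x + B*phi(C x)
# with phi assumed to lie in a local sector [alpha, beta] on |C x| <= r,
# (1) solve an S-procedure LMI for a quadratic Lyapunov function V(x) = x'Px,
# (2) take the largest sublevel set of V contained in the box where the
#     sector bound is valid, as an ROA estimate.
import numpy as np
import cvxpy as cp

# Illustrative 2-state Lur'e system (placeholder values).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
alpha, beta = 0.0, 0.8      # assumed local sector for the FFNN nonlinearity
r = 1.0                     # |C x| <= r : region where the sector bound holds

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
lam = cp.Variable(nonneg=True)           # S-procedure multiplier

# Sector condition (w - alpha*y)(beta*y - w) >= 0 folded into Vdot < 0.
M = cp.bmat([
    [A.T @ P + P @ A - lam * alpha * beta * (C.T @ C),
     P @ B + lam * (alpha + beta) / 2 * C.T],
    [B.T @ P + lam * (alpha + beta) / 2 * C,
     -lam * np.eye(1)],
])
cp.Problem(cp.Minimize(0),
           [P >> 1e-6 * np.eye(n), M << -1e-6 * np.eye(n + 1)]).solve()

# Largest c with {x : x' P x <= c} inside the slab |C x| <= r.
c = r**2 / float(C @ np.linalg.inv(P.value) @ C.T)
print(f"ROA estimate: x^T P x <= {c:.3f}")
```

The sublevel-set radius c is the standard formula for fitting an ellipsoid inside a single slab; the paper's construction additionally couples this with the certified local sector bounds of method (ii).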
Related papers
- From Sublinear to Linear: Fast Convergence in Deep Networks via Locally Polyak-Lojasiewicz Regions [0.0]
This paper presents a theoretical analysis of gradient-descent convergence on the loss of deep neural networks (DNNs).
We show that gradient lower bounds in terms of the suboptimality gap imply that finite-width networks admit regions where a local Polyak-Lojasiewicz condition holds, yielding linear rather than sublinear convergence.
Our work provides a theoretical explanation for the efficiency of deep learning.
arXiv Detail & Related papers (2025-07-29T01:49:16Z) - Robust Stability Analysis of Positive Lure System with Neural Network Feedback [0.0]
We consider a control system of Lur'e type in which not only the linear part includes parametric uncertainty but also the nonlinear sector bound is unknown.
By leveraging the positivity characteristic of the system, we derive an explicit formula for the stability radius of Lur'e systems.
This study introduces a scalable and efficient approach for robustness analysis of both Lur'e and NN-controlled systems.
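As a minimal illustration of the positivity argument (and of the Aizerman-type reasoning the main abstract above relies on), the snippet below checks sector-wide stability of a toy positive Lur'e system by a single vertex test: for a Metzler A and nonnegative B, C, the spectral abscissa of A + kappa*B*C is monotone in kappa, so Hurwitz-ness at the upper sector bound certifies the whole sector. The matrices and sector bound are made-up placeholders; this is not the paper's stability-radius formula.

```python
# Hedged illustration, not the paper's result: positive Lur'e system
# x' = A x + B*phi(C x), A Metzler, B and C nonnegative, phi in sector [0, k].
# The spectral abscissa of A + kappa*B*C is monotone in kappa, so the single
# vertex kappa = k decides stability over the whole sector.
import numpy as np

A = np.array([[-3.0, 1.0], [0.5, -2.0]])   # Metzler: off-diagonals >= 0
B = np.array([[0.0], [1.0]])               # nonnegative
C = np.array([[1.0, 0.0]])                 # nonnegative
k = 1.5                                    # assumed upper sector bound

abscissa = max(np.linalg.eigvals(A + k * (B @ C)).real)
print("A + k*B*C Hurwitz:", abscissa < 0)
# If True, the Aizerman-type argument for positive systems certifies
# stability for every nonlinearity phi in the sector [0, k].
```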
arXiv Detail & Related papers (2025-05-25T00:37:28Z) - Stochastic Optimization with Optimal Importance Sampling [49.484190237840714]
We propose an iterative algorithm that jointly updates the decision variable and the IS distribution without requiring time-scale separation between the two.
Our method achieves the lowest possible variance and guarantees global convergence under convexity of the objective and mild assumptions on the IS distribution family.
arXiv Detail & Related papers (2025-04-04T16:10:18Z) - Ensuring Both Positivity and Stability Using Sector-Bounded Nonlinearity for Systems with Neural Network Controllers [0.0]
We present a stability theorem that demonstrates the global exponential stability of linear systems under fully connected FFNN control.
Our approach effectively addresses the challenge of ensuring stability in highly nonlinear systems.
We showcase the practical applicability of our methodology through its implementation on a linear system controlled by an FFNN trained on output-feedback controller data.
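A quick, purely empirical way to get a feel for such sector bounds is to sample the ratio phi(y)/y of a scalar FFNN over its operating range, as sketched below. This is only a sanity check on a randomly initialized toy network, not the certified bound computation used in these papers.

```python
# Hedged sketch: empirically estimate a sector [alpha, beta] with
# alpha <= phi(y)/y <= beta for a scalar ReLU FFNN on |y| <= 1.
# Random toy weights; a sampled estimate is NOT a certified bound.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2 = rng.normal(size=(1, 8))

def phi(y):
    h = np.maximum(W1 @ y + b1, 0.0)     # ReLU hidden layer
    return float(W2 @ h)

offset = phi(np.zeros(1))                # shift so the nonlinearity fixes 0
ys = np.linspace(-1.0, 1.0, 2001)
ratios = [(phi(np.array([y])) - offset) / y for y in ys if abs(y) > 1e-6]
print(f"empirical sector on |y| <= 1: [{min(ratios):.3f}, {max(ratios):.3f}]")
```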
arXiv Detail & Related papers (2024-06-18T16:05:57Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
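Below is a toy illustration of the linear-interpolation (lookahead-style) update the paper analyzes, under assumed step sizes and a made-up bilinear saddle problem: plain simultaneous gradient descent-ascent spirals away from the saddle of f(x, y) = x*y, while interpolating the slow iterate toward the fast one stabilizes it.

```python
# Hedged toy example (not the paper's setting): f(x, y) = x * y.
# Plain simultaneous GDA diverges from the saddle at (0, 0); interpolating
# the slow iterate a fraction alpha toward k fast GDA steps converges.
eta, alpha, k = 0.1, 0.5, 5          # fast step, interpolation weight, fast steps
x_slow, y_slow = 1.0, 1.0

for _ in range(500):
    x, y = x_slow, y_slow
    for _ in range(k):               # k fast GDA steps (min over x, max over y)
        x, y = x - eta * y, y + eta * x
    # Linear interpolation toward the fast iterate.
    x_slow += alpha * (x - x_slow)
    y_slow += alpha * (y - y_slow)

print(f"final iterate: ({x_slow:.2e}, {y_slow:.2e})")   # near the saddle (0, 0)
```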
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - On the Local Quadratic Stability of T-S Fuzzy Systems in the Vicinity of
the Origin [7.191780076353627]
The main goal of this paper is to introduce new local stability conditions for continuous-time Takagi-Sugeno (T-S) fuzzy systems.
These stability conditions are based on linear matrix inequalities (LMIs) in combination with quadratic Lyapunov functions.
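For intuition, the snippet below runs the classical common-quadratic-Lyapunov LMI test that such conditions build on: feasibility of P > 0 with A_i^T P + P A_i < 0 for every local model A_i certifies quadratic stability. The local models here are illustrative, and the paper's local conditions are more refined than this global test.

```python
# Hedged sketch of the standard common quadratic Lyapunov LMI test for a
# set of local linear models (illustrative matrices, not from the paper).
import numpy as np
import cvxpy as cp

A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-1.5, 0.5], [-0.5, -1.0]])

P = cp.Variable((2, 2), symmetric=True)
cons = [P >> np.eye(2)]
for Ai in (A1, A2):
    cons.append(Ai.T @ P + P @ Ai << -1e-3 * np.eye(2))

cp.Problem(cp.Minimize(cp.trace(P)), cons).solve()
print("common Lyapunov matrix P =\n", P.value)
```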
arXiv Detail & Related papers (2023-09-13T09:43:55Z) - Fully Stochastic Trust-Region Sequential Quadratic Programming for
Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z) - Distributed Learning of Neural Lyapunov Functions for Large-Scale
Networked Dissipative Systems [3.483131882865931]
This paper considers the problem of characterizing the stability region of a large-scale networked system comprised of dissipative nonlinear subsystems.
We propose a new distributed learning-based approach by exploiting the dissipativity structure of the subsystems.
arXiv Detail & Related papers (2022-07-15T20:03:53Z) - Beyond the Edge of Stability via Two-step Gradient Updates [49.03389279816152]
Gradient Descent (GD) is a powerful workhorse of modern machine learning.
GD's ability to find local minimisers is only guaranteed for losses with Lipschitz gradients.
This work focuses on simple, yet representative, learning problems via analysis of two-step gradient updates.
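As a reminder of the threshold this line of work revolves around (a toy illustration under assumed values, not the paper's two-step analysis): on a quadratic with curvature L, gradient descent contracts iff the step size is below 2/L, and just above it the iterates flip sign each step and grow.

```python
# Hedged toy demo of the classical 2/L stability threshold for gradient
# descent on f(x) = (L/2) x^2 (illustrative constants only).
L = 4.0
for eta in (1.9 / L, 2.1 / L):
    x = 1.0
    for _ in range(50):
        x -= eta * L * x          # gradient step on f'(x) = L x
    print(f"step size {eta:.3f} (vs 2/L = {2 / L:.3f}): |x_50| = {abs(x):.2e}")
```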
arXiv Detail & Related papers (2022-06-08T21:32:50Z) - KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed
Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z) - Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
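For orientation, here is a particle caricature of Langevin dynamics under assumed toy choices: noisy gradient steps whose stationary law approximates the Gibbs distribution proportional to exp(-F(x)/tau). The mean-field setting lifts this to a dynamics over distributions, which the convergence theory above addresses; this sketch is not the paper's analysis.

```python
# Hedged toy sketch: unadjusted Langevin iteration
#   x <- x - eta * grad F(x) + sqrt(2 * eta * tau) * noise,
# simulated for a population of particles with F(x) = x^2 / 2, so the
# target Gibbs law is N(0, tau).
import numpy as np

rng = np.random.default_rng(0)
eta, tau, steps = 1e-2, 0.1, 5000
grad_F = lambda x: x

x = rng.normal(size=1000)                       # particles
for _ in range(steps):
    x = x - eta * grad_F(x) + np.sqrt(2 * eta * tau) * rng.normal(size=x.size)

print(f"empirical variance {x.var():.4f}  vs  target tau = {tau}")
```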
arXiv Detail & Related papers (2022-01-25T17:13:56Z) - Linear systems with neural network nonlinearities: Improved stability
analysis via acausal Zames-Falb multipliers [0.0]
We analyze the stability of feedback interconnections of a linear time-invariant system with a neural network nonlinearity in discrete time.
Our approach provides a flexible and versatile framework for stability analysis of feedback interconnections with neural network nonlinearities.
arXiv Detail & Related papers (2021-03-31T14:21:03Z)