Robust Stability of Neural-Network Controlled Nonlinear Systems with
Parametric Variability
- URL: http://arxiv.org/abs/2109.05710v1
- Date: Mon, 13 Sep 2021 05:09:30 GMT
- Title: Robust Stability of Neural-Network Controlled Nonlinear Systems with
Parametric Variability
- Authors: Soumyabrata Talukder, Ratnesh Kumar
- Abstract summary: We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
For computing such a robust stabilizing NN controller, a stability guaranteed training (SGT) algorithm is also proposed.
- Score: 2.0199917525888895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stability certification and identification of the stabilizable operating
region of a system are two important concerns to ensure its operational
safety/security and robustness. With the advent of machine-learning tools,
these issues are especially important for systems with machine-learned
components in the feedback loop. Here we develop a theory for stability and
stabilizability of a class of neural-network controlled nonlinear systems,
where the equilibria can drift when parametric changes occur. A Lyapunov-based
convex stability certificate is developed and is further used to devise an
estimate for a local Lipschitz upper bound for a neural-network (NN) controller
and a corresponding operating domain on the state space, containing an
initialization set from which the closed-loop (CL) local asymptotic stability
of each system in the class is guaranteed under the same controller, while the
system trajectories remain confined to the operating domain. For computing such
a robust stabilizing NN controller, a stability guaranteed training (SGT)
algorithm is also proposed. The effectiveness of the proposed framework is
demonstrated using illustrative examples.
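To make the Lipschitz-bound ingredient above concrete, the following is a minimal, hedged sketch (not the paper's SGT algorithm): it treats the product of layer spectral norms of a ReLU MLP controller as a Lipschitz upper bound and rescales the weights so that this bound stays below a certified threshold. The layer sizes, the threshold L_max, and the helper names are illustrative assumptions.

```python
# Hedged sketch: bounding the Lipschitz constant of a ReLU MLP controller.
# The product of layer spectral norms upper-bounds the (global, hence local)
# Lipschitz constant; rescaling the weights keeps it below a certified L_max.
import numpy as np

rng = np.random.default_rng(0)


def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: an upper bound on the MLP's Lipschitz constant."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))


def project_to_lipschitz_ball(weights, L_max):
    """Uniformly rescale all layers so that the Lipschitz upper bound is at most L_max."""
    L = lipschitz_upper_bound(weights)
    if L <= L_max:
        return weights
    scale = (L_max / L) ** (1.0 / len(weights))
    return [scale * W for W in weights]


# Hypothetical two-layer controller u = W2 @ relu(W1 @ x); L_max stands in for
# the bound that a Lyapunov-based certificate (as in the abstract) would supply.
weights = [rng.standard_normal((8, 2)), rng.standard_normal((1, 8))]
L_max = 5.0

weights = project_to_lipschitz_ball(weights, L_max)
print("certified Lipschitz upper bound:", lipschitz_upper_bound(weights))
```

In an SGT-style training loop, such a projection (or an equivalent penalty on the bound) would typically be interleaved with ordinary controller training steps.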
Related papers
- Ensuring Both Positivity and Stability Using Sector-Bounded Nonlinearity for Systems with Neural Network Controllers [0.0]
We present a stability theorem that demonstrates the global exponential stability of linear systems under fully connected FFNN control.
Our approach effectively addresses the challenge of ensuring stability in highly nonlinear systems.
We showcase the practical applicability of our methodology through its implementation in a linear system managed by an FFNN trained on output feedback controller data.
arXiv Detail & Related papers (2024-06-18T16:05:57Z) - Learning to Boost the Performance of Stable Nonlinear Systems [0.0]
We tackle the performance-boosting problem with closed-loop stability guarantees.
Our methods enable learning over arbitrarily deep neural network classes of performance-boosting controllers for stable nonlinear systems.
arXiv Detail & Related papers (2024-05-01T21:11:29Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers applied to nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed
Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using a feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z) - Stability Verification in Stochastic Control Systems via Neural Network
Supermartingales [17.558766911646263]
We present an approach for general nonlinear control problems with two novel aspects.
We use ranking supermartingales (RSMs) to certify a.s. asymptotic stability, and we present a method for learning neural network RSMs.
arXiv Detail & Related papers (2021-12-17T13:05:14Z) - Recurrent Neural Network Controllers Synthesis with Stability Guarantees
for Partially Observed Systems [6.234005265019845]
We consider the important class of recurrent neural networks (RNN) as dynamic controllers for nonlinear uncertain partially-observed systems.
We propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space.
Numerical experiments show that our method learns stabilizing controllers while using fewer samples and achieving higher final performance compared with policy gradient methods.
arXiv Detail & Related papers (2021-09-08T18:21:56Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Learning Stabilizing Controllers for Unstable Linear Quadratic
Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under the quadratic cost model, also known as linear quadratic regulators (LQR).
We present two different semi-definite programs (SDPs) which result in a controller that stabilizes all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm, eXploration, that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z) - Actor-Critic Reinforcement Learning for Control with Stability Guarantee [9.400585561458712]
Reinforcement Learning (RL) and its integration with deep learning have achieved impressive performance in various robotic control tasks.
However, stability is not guaranteed in model-free RL that relies solely on data.
We propose an actor-critic RL framework for control which can guarantee closed-loop stability by employing the classic Lyapunov method from control theory (a toy illustration of the Lyapunov-check idea is sketched after this entry).
arXiv Detail & Related papers (2020-04-29T16:14:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.