Distributed Learning of Neural Lyapunov Functions for Large-Scale
Networked Dissipative Systems
- URL: http://arxiv.org/abs/2207.07731v1
- Date: Fri, 15 Jul 2022 20:03:53 GMT
- Title: Distributed Learning of Neural Lyapunov Functions for Large-Scale
Networked Dissipative Systems
- Authors: Amit Jena, Tong Huang, S. Sivaranjani, Dileep Kalathil, Le Xie
- Abstract summary: This paper considers the problem of characterizing the stability region of a large-scale networked system comprised of dissipative nonlinear subsystems.
We propose a new distributed learning based approach by exploiting the dissipativity structure of the subsystems.
- Score: 3.483131882865931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the problem of characterizing the stability region of a
large-scale networked system comprised of dissipative nonlinear subsystems, in
a distributed and computationally tractable way. One standard approach to
estimate the stability region of a general nonlinear system is to first find a
Lyapunov function for the system and characterize its region of attraction as
the stability region. However, classical approaches, such as sum-of-squares
methods and quadratic approximation, for finding a Lyapunov function either do
not scale to large systems or give very conservative estimates for the
stability region. In this context, we propose a new distributed learning based
approach by exploiting the dissipativity structure of the subsystems. Our
approach has two parts: the first part is a distributed approach to learn the
storage functions (similar to the Lyapunov functions) for all the subsystems,
and the second part is a distributed optimization approach to find the Lyapunov
function for the networked system using the learned storage functions of the
subsystems. We demonstrate the superior performance of our proposed approach
through extensive case studies in microgrid networks.
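As a rough illustration of the two-part pipeline (a sketch under assumed dynamics, not the authors' implementation: the subsystem model `f_i`, network sizes, and all training constants below are hypothetical), part one trains a neural storage function per subsystem by penalizing violations of a decrease condition along sampled one-step flows, and part two composes a network-level Lyapunov candidate as a positively weighted sum of the learned storage functions:
```python
import torch
import torch.nn as nn

class StorageNet(nn.Module):
    """Neural storage-function candidate: V_i(x_i) >= 0 with V_i(0) = 0."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        # Squaring the shifted output enforces V >= 0 and V(0) = 0.
        return (self.net(x) - self.net(torch.zeros_like(x))).pow(2)

def train_storage(f_i, dim, steps=2000, lr=1e-3, dt=0.01):
    """Part 1: fit V_i so it decreases along sampled flows of subsystem f_i."""
    V = StorageNet(dim)
    opt = torch.optim.Adam(V.parameters(), lr=lr)
    for _ in range(steps):
        x = 2.0 * torch.rand(256, dim) - 1.0   # sample the region of interest
        x_next = x + dt * f_i(x)               # one explicit Euler step
        viol = V(x_next) - V(x) + 1e-3         # discrete decrease condition
        loss = torch.relu(viol).mean()         # hinge loss on violations
        opt.zero_grad(); loss.backward(); opt.step()
    return V

# Part 2 (schematic): a network-level Lyapunov candidate as a positive
# combination of learned storage functions; in the paper the weights come
# from a distributed optimization over the subsystems' dissipativity data.
def network_lyapunov(storage_fns, weights, subsystem_states):
    return sum(p * V(x) for p, V, x in zip(weights, storage_fns, subsystem_states))
```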
Related papers
- Local Stability and Region of Attraction Analysis for Neural Network Feedback Systems under Positivity Constraints [0.0]
We study the local stability of nonlinear systems in the Lur'e form with static nonlinear feedback realized by feedforward neural networks (FFNNs). By leveraging positivity constraints on the system, we employ a localized variant of the Aizerman conjecture, which provides sufficient conditions for exponential stability of trajectories confined to a compact set.
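For intuition (a simplified stand-in, not the paper's positivity-based test): an Aizerman-style check for a Lur'e system x' = Ax + b*phi(c^T x) replaces the FFNN nonlinearity phi by linear gains k from its sector [k1, k2] and asks whether A + k*b*c^T remains Hurwitz across the sector:
```python
import numpy as np

def aizerman_sector_check(A, b, c, k_lo, k_hi, n_grid=200):
    """Grid test: is A + k*b*c^T Hurwitz for all gains k in [k_lo, k_hi]?
    A sufficient-condition sketch in the spirit of the Aizerman conjecture."""
    for k in np.linspace(k_lo, k_hi, n_grid):
        eig = np.linalg.eigvals(A + k * np.outer(b, c))
        if np.max(eig.real) >= 0.0:
            return False, k   # sector gain k destabilizes the linearization
    return True, None

# Example: a stable 2-state Lur'e loop with feedback confined to sector [0, 1].
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
print(aizerman_sector_check(A, b, c, 0.0, 1.0))
```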
arXiv Detail & Related papers (2025-05-28T21:45:49Z)
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both the drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled stochastic differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
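For orientation (a classical reference method, not the estimator or library from the paper), the Kramers-Moyal recipe recovers the drift and squared diffusion of a one-dimensional SDE from local averages of trajectory increments:
```python
import numpy as np

def kramers_moyal(x, dt, n_bins=30):
    """Bin-wise estimates of drift b(x) ~ E[dx]/dt and squared diffusion
    sigma^2(x) ~ E[dx^2]/dt from a 1-D sample path. Reference method only."""
    dx = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.digitize(x[:-1], bins) - 1
    drift = np.full(n_bins, np.nan)
    diff2 = np.full(n_bins, np.nan)
    for i in range(n_bins):
        m = idx == i
        if m.any():
            drift[i] = dx[m].mean() / dt
            diff2[i] = (dx[m] ** 2).mean() / dt
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, drift, diff2

# Example: simulate an Ornstein-Uhlenbeck path dx = -x dt + 0.5 dW, then estimate.
rng = np.random.default_rng(0)
dt, n = 1e-3, 100_000
x = np.empty(n)
x[0] = 0.0
noise = 0.5 * np.sqrt(dt) * rng.standard_normal(n - 1)
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + noise[t]
centers, drift, diff2 = kramers_moyal(x, dt)
```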
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- Stability analysis of chaotic systems in latent spaces [4.266376725904727]
We show that a latent-space approach can infer the solution of a chaotic partial differential equation.
It can also predict the stability properties of the physical system.
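The stability properties in question are typically summarized by Lyapunov exponents; a compact Benettin-style estimator of the leading exponent works for any discrete map, including a learned latent propagator (the logistic map below is just a stand-in):
```python
import numpy as np

def leading_lyapunov(step, x0, n_steps=10_000, eps=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent of a map.
    `step` advances the state one step; a learned latent model works too."""
    x, y = x0, x0 + eps
    acc = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = abs(y - x)
        acc += np.log(d / eps)
        y = x + eps * (y - x) / d     # renormalize the perturbation
    return acc / n_steps

# Logistic map at r = 4 has exponent log(2) ~ 0.693.
print(leading_lyapunov(lambda x: 4.0 * x * (1.0 - x), 0.4))
```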
arXiv Detail & Related papers (2024-10-01T08:09:14Z)
- Neural Lyapunov Control for Discrete-Time Systems [30.135651803114307]
A general approach is to compute a combination of a Lyapunov function and an associated control policy.
Several methods have been proposed that represent Lyapunov functions using neural networks.
We propose the first approach for learning neural Lyapunov control in a broad class of discrete-time systems.
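A minimal sketch of the generic learning loop behind such methods, without the verification step that certifies the result (the plant `f`, network sizes, and margins below are hypothetical): jointly train a controller and a Lyapunov candidate so that V decreases along the closed-loop map x+ = f(x, pi(x)):
```python
import torch
import torch.nn as nn

# Hypothetical discrete-time plant x+ = f(x, u): a damped double integrator.
def f(x, u, dt=0.1):
    p, v = x[:, :1], x[:, 1:]
    return torch.cat([p + dt * v, v + dt * (u - 0.1 * v)], dim=1)

V  = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
pi = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(V.parameters()) + list(pi.parameters()), lr=1e-3)

for _ in range(3000):
    x = 4.0 * torch.rand(512, 2) - 2.0
    v_now  = V(x) - V(torch.zeros(1, 2))          # shift so V(0) = 0
    v_next = V(f(x, pi(x))) - V(torch.zeros(1, 2))
    loss = (torch.relu(v_next - v_now + 1e-3)     # decrease along the closed loop
            + torch.relu(1e-3 * x.pow(2).sum(1, keepdim=True) - v_now)  # positivity
           ).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```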
arXiv Detail & Related papers (2023-05-11T03:28:20Z)
- Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers [5.543220407902113]
This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers.
Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system.
We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system.
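A sketch of the inclusion-function ingredient, assuming a plain affine/ReLU controller given as weight and bias lists `Ws`, `bs`: interval bound propagation maps an input box to a box guaranteed to contain every output, which the paper then embeds into the mixed-monotone closed-loop system:
```python
import numpy as np

def interval_forward(Ws, bs, lo, hi):
    """Propagate an input box [lo, hi] through an affine/ReLU network.
    Returns a box guaranteed to contain every output (an inclusion function)."""
    for i, (W, b) in enumerate(zip(Ws, bs)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(Ws) - 1:              # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Tiny controller: 2 -> 4 -> 1, evaluated on the input box [-1, 1]^2.
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
bs = [np.zeros(4), np.zeros(1)]
print(interval_forward(Ws, bs, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```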
arXiv Detail & Related papers (2023-01-19T06:46:36Z)
- Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees [4.786698731084036]
We propose a learning framework to stabilize an unknown nonlinear system with a neural controller and learn a neural Lyapunov function.
We provide theoretical guarantees of the proposed learning framework in terms of the closed-loop stability for the unknown nonlinear system.
arXiv Detail & Related papers (2022-06-04T05:57:31Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Stability Verification in Stochastic Control Systems via Neural Network Supermartingales [17.558766911646263]
We present an approach for general nonlinear control problems with two novel aspects.
We use ranking supermartingales (RSMs) to certify almost-sure asymptotic stability, and we present a method for learning RSMs as neural networks.
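Schematically, an RSM candidate V must decrease in expectation along the stochastic dynamics; a sketch of the training signal with the expectation replaced by a Monte-Carlo average over sampled noise (the closed-loop map `step`, noise model, and margin below are hypothetical):
```python
import torch
import torch.nn as nn

# Softplus output keeps the candidate nonnegative everywhere.
V = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

def step(x, w):                      # hypothetical stochastic closed loop
    return 0.9 * x + 0.05 * w

for _ in range(2000):
    x = 4.0 * torch.rand(256, 2) - 2.0
    w = torch.randn(16, 256, 2)                    # 16 noise samples per state
    exp_next = V(step(x.unsqueeze(0), w)).mean(0)  # Monte-Carlo E[V(x+)]
    loss = torch.relu(exp_next - V(x) + 0.1).mean()  # RSM decrease, eps = 0.1
    opt.zero_grad(); loss.backward(); opt.step()
```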
arXiv Detail & Related papers (2021-12-17T13:05:14Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our method learns these basis functions via supervised learning.
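In Koopman-style identification, the learned basis psi should make the dynamics linear in the lifted coordinates, psi(x_{t+1}) ~ K psi(x_t); a minimal state-based sketch (the paper works from images, and the data below are stand-ins):
```python
import torch
import torch.nn as nn

psi = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 16))  # learned basis
K = nn.Linear(16, 16, bias=False)                                    # Koopman matrix
opt = torch.optim.Adam(list(psi.parameters()) + list(K.parameters()), lr=1e-3)

X = torch.randn(1024, 2)            # stand-in for observed states x_t
X_next = 0.95 * X                   # stand-in for their successors x_{t+1}

for _ in range(2000):
    loss = (psi(X_next) - K(psi(X))).pow(2).mean()  # linearity in lifted space
    opt.zero_grad(); loss.backward(); opt.step()
```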
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
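For a concrete instance of this problem class, consider the sharing problem min_x sum_i f_i(x_i) + g(sum_i x_i); even plain dual ascent decouples the agents, each updating its own block against a shared multiplier (a scalar NumPy sketch with quadratic f_i and g, not the paper's algorithm):
```python
import numpy as np

# Sharing problem: minimize sum_i f_i(x_i) + g(s) subject to s = sum_i x_i,
# with f_i(x) = 0.5*(x - a_i)^2 and g(s) = 0.5*rho*s^2. Dual ascent decouples
# the agents: each solves a local problem against the shared multiplier lam.
a = np.array([1.0, 2.0, -0.5, 3.0])
rho, step, lam = 2.0, 0.15, 0.0
for _ in range(300):
    x = a - lam                  # local primal updates, one per agent
    s = lam / rho                # shared-resource update from g
    lam += step * (x.sum() - s)  # multiplier (dual) ascent on the coupling
print(x, x.sum())
```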
arXiv Detail & Related papers (2020-06-15T19:40:24Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
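The optimization link, in brief: discretizing the damped Hamiltonian flow q' = p, p' = -grad f(q) - gamma*p with a conformal symplectic (semi-implicit) Euler step yields a momentum-style method; a sketch on a quadratic objective (step size and damping chosen purely for illustration):
```python
import numpy as np

def conformal_symplectic_gd(grad, q, gamma=1.0, h=0.1, n_steps=200):
    """Semi-implicit (conformal symplectic) Euler for the damped Hamiltonian
    flow q' = p, p' = -grad(q) - gamma*p; behaves like heavy-ball descent."""
    p = np.zeros_like(q)
    for _ in range(n_steps):
        p = np.exp(-gamma * h) * p - h * grad(q)  # dissipate, then kick
        q = q + h * p                             # drift with updated momentum
    return q

# Minimize f(q) = 0.5 * q^T A q; the iterate should approach the origin.
A = np.diag([1.0, 10.0])
print(conformal_symplectic_gd(lambda q: A @ q, np.array([3.0, -2.0])))
```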
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.