Distributed Identification of Contracting and/or Monotone Network Dynamics
- URL: http://arxiv.org/abs/2107.14309v1
- Date: Thu, 29 Jul 2021 20:15:02 GMT
- Title: Distributed Identification of Contracting and/or Monotone Network Dynamics
- Authors: Max Revay, Jack Umenberger, Ian R. Manchester
- Abstract summary: This paper proposes methods for identification of large-scale networked systems with guarantees that the resulting model will be contracting.
The main challenges that we address are: simultaneously searching for model parameters and a certificate of stability, and scaling to networks with hundreds or thousands of nodes.
- Score: 8.057006406834466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes methods for identification of large-scale networked
systems with guarantees that the resulting model will be contracting -- a
strong form of nonlinear stability -- and/or monotone, i.e. order relations
between states are preserved. The main challenges that we address are:
simultaneously searching for model parameters and a certificate of stability,
and scaling to networks with hundreds or thousands of nodes. We propose a
model set that admits convex constraints for stability and monotonicity, and
has a separable structure that allows distributed identification via the
alternating direction method of multipliers (ADMM). The performance and
scalability of the approach are illustrated on a variety of linear and
nonlinear case studies, including a nonlinear traffic network with a
200-dimensional state space.
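As a rough illustration of the separable structure the abstract describes, the sketch below runs consensus ADMM on a generic separable least-squares identification problem. Everything here is an illustrative assumption: the local data matrices A_i, b_i, the penalty rho, and the plain averaging consensus step stand in for the paper's constrained model set.

```python
import numpy as np

# Minimal consensus-ADMM sketch: each node fits its local model
# parameters in parallel, and a consensus step couples the estimates.
# The local least-squares data and the averaging step are placeholders,
# not the paper's actual model set or constraints.

def admm_consensus(A_list, b_list, rho=1.0, iters=100):
    n = A_list[0].shape[1]
    N = len(A_list)
    x = [np.zeros(n) for _ in range(N)]   # local parameter estimates
    z = np.zeros(n)                       # consensus variable
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    for _ in range(iters):
        # Local step: each node solves a regularized least squares;
        # in a distributed implementation these run in parallel.
        for i in range(N):
            H = A_list[i].T @ A_list[i] + rho * np.eye(n)
            g = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        # Consensus step: plain averaging here; the paper's method
        # would enforce its convex stability constraints instead.
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        # Dual update.
        for i in range(N):
            u[i] += x[i] - z
    return z
```

In the paper's setting, each node's local step would additionally enforce the convex contraction/monotonicity constraints, so it would plausibly be a small convex program rather than a single linear solve.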
Related papers
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning carries the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are the complex dynamical systems with large state spaces, the costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
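For orientation, distributionally robust MDPs of this kind are typically posed through a robust Bellman equation over a divergence ball around a nominal transition kernel; in standard notation (assumed here, not quoted from the paper):

```latex
V^{*}(s) \;=\; \max_{a \in \mathcal{A}} \;\inf_{P \in \mathcal{U}_{\delta}(s,a)}
\Big[\, r(s,a) + \gamma\, \mathbb{E}_{s' \sim P}\big[V^{*}(s')\big] \Big],
\qquad
\mathcal{U}_{\delta}(s,a) = \big\{ P : D\big(P \,\Vert\, P_{0}(\cdot \mid s,a)\big) \le \delta \big\},
```

where D is the Kullback-Leibler, chi-square, or total variation divergence and P_0 is the nominal (learned) transition model.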
arXiv Detail & Related papers (2023-09-05T13:42:11Z)
- Learning Stable Koopman Embeddings [9.239657838690228]
We present a new data-driven method for learning stable models of nonlinear systems.
We prove that every discrete-time nonlinear contracting model can be learnt in our framework.
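For contrast with the guaranteed-by-construction approach above, the toy sketch below fits a linear Koopman-style operator on hand-picked lifted coordinates and then crudely clips its singular values; the dictionary `lift` and the post-hoc projection are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Naive stable-Koopman baseline: least-squares fit of a linear operator
# on lifted states, then singular-value clipping so ||A||_2 <= 1.
# (The paper instead learns embeddings whose stability holds by
# construction, avoiding this kind of projection.)

def lift(x):
    # Toy observable dictionary: state, elementwise squares, constant.
    return np.concatenate([x, x**2, [1.0]])

def fit_stable_koopman(X, Y):
    # X, Y: (T, n) arrays of state pairs with Y[t] one step after X[t].
    Phi_X = np.stack([lift(x) for x in X])       # lifted states, (T, m)
    Phi_Y = np.stack([lift(y) for y in Y])
    W, *_ = np.linalg.lstsq(Phi_X, Phi_Y, rcond=None)
    A = W.T                                      # phi_{t+1} ~ A phi_t
    U, s, Vt = np.linalg.svd(A)
    return U @ np.diag(np.minimum(s, 1.0)) @ Vt  # clip to unit norm
```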
arXiv Detail & Related papers (2021-10-13T05:44:13Z)
- A purely data-driven framework for prediction, optimization, and control of networked processes: application to networked SIS epidemic model [0.8287206589886881]
We develop a data-driven framework based on operator-theoretic techniques to identify and control nonlinear dynamics over large-scale networks.
The proposed approach requires no prior knowledge of the network structure and identifies the underlying dynamics solely using a collection of two-step snapshots of the states.
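The simplest instance of identification from two-step snapshots is a DMD-style least-squares fit; the sketch below is that bare-bones version (variable names assumed), whereas the paper's operator-theoretic framework lifts the states and handles optimization and control as well.

```python
import numpy as np

# Bare-bones snapshot identification: stack two-step state snapshots
# column-wise and solve for the best linear operator in the
# least-squares sense, A ~ argmin_A ||A X - Xp||_F.

def identify_from_snapshots(X, Xp):
    # X, Xp: (n, T) arrays with Xp[:, t] the state one step after X[:, t].
    return Xp @ np.linalg.pinv(X)
```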
arXiv Detail & Related papers (2021-08-01T03:57:10Z)
- Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness [3.2872586139884623]
This paper introduces recurrent equilibrium networks (RENs) for applications in machine learning, system identification and control.
RENs are parameterized directly by a vector in R^N, i.e. stability and robustness are ensured without parameter constraints.
The paper also presents applications in data-driven nonlinear observer design and control with stability guarantees.
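To make the "no parameter constraints" idea concrete, here is a toy construction (not the REN parameterization itself): an arbitrary matrix of free parameters is mapped to one with spectral norm below 1, so every point of the parameter space yields a contracting linear recursion and unconstrained gradient methods apply.

```python
import numpy as np

# Toy direct parameterization: any free matrix W is rescaled so that
# ||A||_2 < 1, hence x_{t+1} = A x_t is contracting for every W.
# This is an illustrative stand-in, not the construction used by RENs.

def free_to_contracting(W, margin=1e-3):
    sigma = np.linalg.norm(W, 2)          # largest singular value
    return W / (1.0 + margin + sigma)     # now ||A||_2 < 1 strictly

rng = np.random.default_rng(0)
W = 10.0 * rng.standard_normal((4, 4))    # arbitrary free parameters
A = free_to_contracting(W)
assert np.linalg.norm(A, 2) < 1.0         # stable regardless of W
```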
arXiv Detail & Related papers (2021-04-13T05:09:41Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial the corresponding ELBO can be computed analytically.
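The analytic-ELBO claim rests on the standard decomposition (notation assumed, not quoted from the paper):

```latex
\mathrm{ELBO}(q) \;=\; \mathbb{E}_{x \sim q}\big[\log p(x)\big] \;+\; \mathbb{H}(q).
```

When log p(x) is a polynomial, the expectation term reduces to moments of q, and both these moments and the entropy H(q) are tractable for selective SPNs, so no sampling is needed.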
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise AutoRegressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
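A mixture-of-experts piecewise-ARX predictor might look like the toy sketch below; the gating functions and expert parameters are placeholders, and the NPWARX identification procedure itself is not shown.

```python
import numpy as np

# Toy piecewise-ARX prediction in the mixture-of-experts style: a hard
# gate selects one local linear (ARX) expert per regressor. Regions
# and experts here are placeholders for whatever identification finds.

def pwarx_predict(phi, gates, experts):
    # phi: regressor, e.g. [y_{t-1}, ..., u_{t-1}, ...]
    # gates: list of boolean functions partitioning regressor space
    # experts: list of (theta, c) pairs, one local ARX model each
    for gate, (theta, c) in zip(gates, experts):
        if gate(phi):
            return float(theta @ phi) + c
    raise ValueError("regressor falls outside every region")
```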
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- Monotone operator equilibrium networks [97.86610752856987]
We develop a new class of implicit-depth models based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ).
We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem.
We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point.
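Concretely, the monDEQ paper's construction (up to notation) builds W from free matrices A and B with a margin m > 0 so that I - W is strongly monotone, which is what guarantees the unique equilibrium:

```python
import numpy as np

# monDEQ-style monotone parameterization: W = (1 - m) I - A^T A + (B - B^T).
# The symmetric part of I - W is m I + A^T A >= m I, so I - W is strongly
# monotone with modulus m, and the equilibrium z* = sigma(W z* + U x + b)
# exists and is unique for suitable activations sigma.

def monotone_W(A, B, m=0.1):
    # A: (k, n) and B: (n, n) are unconstrained parameter matrices.
    n = A.shape[1]
    return (1.0 - m) * np.eye(n) - A.T @ A + (B - B.T)
```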
arXiv Detail & Related papers (2020-06-15T17:57:31Z)
- A Convex Parameterization of Robust Recurrent Neural Networks [3.2872586139884623]
Recurrent neural networks (RNNs) are a class of nonlinear dynamical systems often used to model sequence-to-sequence maps.
We formulate convex sets of RNNs with stability and robustness guarantees.
arXiv Detail & Related papers (2020-04-11T03:12:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.