Learning Stable and Robust Linear Parameter-Varying State-Space Models
- URL: http://arxiv.org/abs/2304.01828v2
- Date: Tue, 26 Sep 2023 15:36:44 GMT
- Title: Learning Stable and Robust Linear Parameter-Varying State-Space Models
- Authors: Chris Verhoek and Ruigang Wang and Roland Tóth
- Abstract summary: This paper presents two direct parameterizations of stable and robust linear parameter-varying state-space (LPV-SS) models.
Since the parametrizations are direct, the models can be trained using unconstrained optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents two direct parameterizations of stable and robust linear parameter-varying state-space (LPV-SS) models. The model parametrizations guarantee a priori that, for all parameter values during training, the allowed models are stable in the contraction sense or have their Lipschitz constant bounded by a user-defined value $\gamma$. Furthermore, since the parametrizations are direct, the models can be trained using unconstrained optimization. The fact that the trained models are of the LPV-SS class makes them useful for, e.g., further convex analysis or controller design. The effectiveness of the approach is demonstrated on an LPV identification problem.
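To make the main idea concrete, here is a minimal, hedged sketch of a direct parameterization in the same spirit: the state matrix of an LPV-SS model is rescaled so that its spectral norm stays below one for every scheduling value, while all raw parameters remain unconstrained. This is an illustration only, not the parameterization proposed in the paper; the class name, dimensions, and margin value are made up.

```python
# Illustrative sketch (not the paper's parameterization): an LPV-SS model
#   x_{k+1} = A(p) x_k + B(p) u_k,   y_k = C(p) x_k + D(p) u_k,
# where A(p) is rescaled so that ||A(p)||_2 < 1 for every scheduling value p,
# giving contraction in the identity metric without any parameter constraints.
import torch
import torch.nn as nn


class ContractiveLPVSS(nn.Module):
    def __init__(self, nx, nu, ny, n_p, margin=0.05):
        super().__init__()
        # Affine scheduling dependence: A(p) = A_0 + sum_i p_i * A_i (same for B).
        self.A = nn.Parameter(0.1 * torch.randn(n_p + 1, nx, nx))
        self.B = nn.Parameter(0.1 * torch.randn(n_p + 1, nx, nu))
        self.C = nn.Parameter(0.1 * torch.randn(ny, nx))
        self.D = nn.Parameter(0.1 * torch.randn(ny, nu))
        self.margin = margin

    def _A_of_p(self, p):
        coeffs = torch.cat([torch.ones(1), p])            # [1, p_1, ..., p_np]
        A = torch.einsum("i,ijk->jk", coeffs, self.A)
        # Rescale so the spectral norm never exceeds 1 - margin.
        s = torch.linalg.matrix_norm(A, ord=2)
        return A * (1.0 - self.margin) / torch.clamp(s, min=1.0 - self.margin)

    def forward(self, x, u, p):
        coeffs = torch.cat([torch.ones(1), p])
        B = torch.einsum("i,ijk->jk", coeffs, self.B)
        x_next = self._A_of_p(p) @ x + B @ u
        y = self.C @ x + self.D @ u
        return x_next, y
```

Because the stability guarantee holds for any value of the raw parameters, such a model can be fitted to data with an off-the-shelf optimizer, which is the practical benefit the abstract highlights.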
Related papers
- An Iterative Bayesian Approach for System Identification based on Linear Gaussian Models [86.05414211113627]
We tackle the problem of system identification, where we select inputs, observe the corresponding outputs from the true system, and optimize the parameters of our model to best fit the data.
We propose a flexible and computationally tractable methodology that is compatible with any system and parametric family of models.
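The select-input / observe-output / update loop can be pictured with a standard linear Gaussian (Kalman-style) posterior update; the sketch below is a generic illustration under that model, not the paper's specific algorithm, and the "true system", noise level, and input-selection rule are invented for the demo.

```python
# Generic select-input / observe-output / update loop for a linear Gaussian
# model (illustrative only, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
d, noise_std = 3, 0.1
theta_true = rng.normal(size=d)            # unknown "true system" parameters

# Gaussian prior over model parameters: N(mu, Sigma)
mu, Sigma = np.zeros(d), np.eye(d)

for step in range(20):
    # 1) Select the next input (here: the direction of largest posterior variance).
    eigval, eigvec = np.linalg.eigh(Sigma)
    u = eigvec[:, -1]
    # 2) Observe the corresponding output from the true system (with noise).
    y = theta_true @ u + noise_std * rng.normal()
    # 3) Conjugate update of the Gaussian posterior.
    S = u @ Sigma @ u + noise_std**2
    K = Sigma @ u / S
    mu = mu + K * (y - u @ mu)
    Sigma = Sigma - np.outer(K, u @ Sigma)

print("estimate:", mu, " true:", theta_true)
```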
arXiv Detail & Related papers (2025-01-28T01:57:51Z)
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models and find that a portion of them is ineffective for generation.
We propose a novel model fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of the optimizers and parameterizations considered.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Stabilizing reinforcement learning control: A modular framework for optimizing over all stable behavior [2.4641488282873225]
We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with stability guarantees.
Recent advances in behavioral systems allow us to construct a data-driven internal model.
We analyze the stability of such data-driven models in the presence of noise.
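The behavioral-systems idea of a data-driven internal model can be illustrated with a Hankel-matrix predictor in the style of Willems' fundamental lemma; the toy first-order plant and horizon lengths below are made up, and this is a sketch of the ingredient the framework builds on rather than the framework itself.

```python
# Illustrative Hankel-matrix ("data-driven") predictor: one measured
# input/output trajectory serves as a non-parametric internal model.
import numpy as np

def hankel(w, L):
    """Stack length-L windows of the signal w (T x m) as columns."""
    T, m = w.shape
    return np.column_stack([w[i:i + L].reshape(-1) for i in range(T - L + 1)])

rng = np.random.default_rng(1)
# Toy single-input single-output LTI system used as the "true" plant.
a, b = 0.8, 0.5
T, Tini, Tf = 200, 4, 10
u = rng.normal(size=(T, 1))
y = np.zeros((T, 1))
for k in range(T - 1):
    y[k + 1] = a * y[k] + b * u[k]

L = Tini + Tf
Hu, Hy = hankel(u, L), hankel(y, L)
Up, Uf = Hu[:Tini], Hu[Tini:]
Yp, Yf = Hy[:Tini], Hy[Tini:]

# Find a combination g of recorded trajectories consistent with the most
# recent Tini samples and a planned future input, then read off the
# predicted future output Yf @ g.
u_ini, y_ini = u[-Tini:].reshape(-1), y[-Tini:].reshape(-1)
u_fut = rng.normal(size=Tf)
g, *_ = np.linalg.lstsq(np.vstack([Up, Yp, Uf]),
                        np.concatenate([u_ini, y_ini, u_fut]), rcond=None)
y_pred = Yf @ g
```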
arXiv Detail & Related papers (2023-10-21T19:32:11Z)
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
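A minimal active-learning loop for a kernel surrogate might look as follows; the greedy space-filling criterion and the placeholder expensive_solver are stand-ins for illustration and do not reproduce the paper's optimality criterion or snapshot model.

```python
# Generic active-learning loop for a kernel (RBF) surrogate model.
import numpy as np

def expensive_solver(p):
    # Placeholder for the costly parametric simulation (made up for the demo).
    return np.sin(3.0 * p) + 0.3 * p**2

def rbf(x, y, width=0.5):
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * width**2))

candidates = np.linspace(0.0, 3.0, 301)       # candidate parameter values
params = np.array([0.0, 3.0])                 # initial snapshots at the ends
values = expensive_solver(params)

for _ in range(8):
    # Greedy space-filling criterion (a simple stand-in for the paper's
    # criterion): query the candidate farthest from the existing snapshots.
    dist = np.min(np.abs(candidates[:, None] - params[None, :]), axis=1)
    p_new = candidates[np.argmax(dist)]
    params = np.append(params, p_new)
    values = np.append(values, expensive_solver(p_new))

# Fit the kernel surrogate on the collected snapshots and evaluate it.
K = rbf(params, params) + 1e-8 * np.eye(len(params))
weights = np.linalg.solve(K, values)

def predict(p):
    return rbf(np.atleast_1d(p), params) @ weights
```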
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
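The sparse fine-tuning view can be sketched by freezing a pre-trained model and letting only a fixed subset of entries receive gradient updates; the model, the 1% tuning ratio, and the random mask below are arbitrary choices for illustration (real methods select the mask far more carefully).

```python
# Minimal sparse fine-tuning sketch: most weights stay fixed and only a small,
# fixed subset of entries is updated.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Random sparse mask per parameter tensor (about 1% of entries tunable).
masks = {n: (torch.rand_like(p) < 0.01).float() for n, p in model.named_parameters()}

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, target = torch.randn(32, 128), torch.randint(0, 10, (32,))

for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), target)
    loss.backward()
    # Sparsify the update: zero out gradients outside the mask before stepping.
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.grad.mul_(masks[n])
    opt.step()
```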
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient fine-tuning method termed SSF, meaning that researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to match the performance of full fine-tuning.
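A rough sketch of the scale-and-shift idea: the backbone stays frozen and only a per-feature scale and shift (plus the task head) are trained. The layer placement and dimensions here are illustrative assumptions, not the exact SSF recipe.

```python
# Scale-and-shift sketch: frozen backbone, learnable per-feature affine
# modulation plus a task head.
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(dim))   # learnable shift

    def forward(self, x):
        return x * self.gamma + self.beta

backbone = nn.Sequential(nn.Linear(224, 512), nn.GELU(), nn.Linear(512, 512))
for p in backbone.parameters():
    p.requires_grad_(False)                          # backbone stays frozen

head = nn.Sequential(ScaleShift(512), nn.Linear(512, 100))
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)  # only SSF + head are tuned

logits = head(backbone(torch.randn(8, 224)))         # example forward pass
```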
arXiv Detail & Related papers (2022-10-17T08:14:49Z)
- Learning Stable Koopman Embeddings [9.239657838690228]
We present a new data-driven method for learning stable models of nonlinear systems.
We prove that every discrete-time nonlinear contracting model can be learnt in our framework.
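An illustrative Koopman-style model in this spirit pairs a learned embedding with a linear operator that is rescaled to have spectral norm below one, so the lifted dynamics are stable by construction; this is a simplified stand-in, not the paper's parameterization.

```python
# Simplified stable Koopman-style model: learned embedding phi plus a linear
# operator A with ||A||_2 <= 0.99, trained on one-step prediction error.
import torch
import torch.nn as nn

class StableKoopman(nn.Module):
    def __init__(self, nx=2, nz=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(nx, 64), nn.Tanh(), nn.Linear(64, nz))
        self.A_raw = nn.Parameter(0.1 * torch.randn(nz, nz))

    def A(self):
        # Rescale so ||A||_2 <= 0.99, which makes z_{k+1} = A z_k stable.
        s = torch.linalg.matrix_norm(self.A_raw, ord=2)
        return self.A_raw * 0.99 / torch.clamp(s, min=0.99)

    def loss(self, x_k, x_next):
        # One-step prediction error in the embedding space.
        return ((self.phi(x_next) - self.phi(x_k) @ self.A().T) ** 2).mean()
```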
arXiv Detail & Related papers (2021-10-13T05:44:13Z)
- Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness [3.2872586139884623]
This paper introduces recurrent equilibrium networks (RENs) for applications in machine learning, system identification and control.
RENs are parameterized directly by a vector in $\mathbb{R}^N$, i.e., stability and robustness are ensured without parameter constraints.
The paper also presents applications in data-driven nonlinear observer design and control with stability guarantees.
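The "direct parameterization" idea behind RENs can be sketched with a simpler construction: map any unconstrained parameter vector to a contracting recurrent cell via a Cayley transform, so no constraints or projections are ever needed during training. The construction below is only an analogy, not the actual REN parameterization.

```python
# Analogy to direct parameterization: every value of the unconstrained
# parameters yields a contracting recurrent model (||A||_2 = rho < 1).
import torch
import torch.nn as nn

class ContractingCell(nn.Module):
    def __init__(self, nx, nu):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(nx, nx))    # unconstrained
        self.rho_raw = nn.Parameter(torch.tensor(1.0))      # unconstrained
        self.B = nn.Parameter(0.1 * torch.randn(nx, nu))

    def A(self):
        # Skew-symmetric part -> Cayley transform -> orthogonal matrix Q.
        S = self.W - self.W.T
        I = torch.eye(S.shape[0])
        Q = torch.linalg.solve(I + S, I - S)
        # Scale by rho in (0, 1): ||A||_2 = rho < 1 for any parameter value.
        return torch.sigmoid(self.rho_raw) * Q

    def forward(self, x, u):
        # tanh is 1-Lipschitz, so the state map is a contraction in x.
        return torch.tanh(self.A() @ x + self.B @ u)
```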
arXiv Detail & Related papers (2021-04-13T05:09:41Z)
- Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
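A toy version of the tracking setting: the optimum follows a random walk and a constant-step-size LMS learner follows it; the step size, drift, and noise levels are arbitrary demo values.

```python
# Constant-step-size LMS tracking a drifting (random-walk) optimum.
import numpy as np

rng = np.random.default_rng(0)
d, mu_step, drift_std, noise_std = 5, 0.1, 0.01, 0.1
w_opt = rng.normal(size=d)       # time-varying optimum
w_hat = np.zeros(d)              # online learner's estimate

errors = []
for k in range(5000):
    w_opt = w_opt + drift_std * rng.normal(size=d)        # random-walk drift
    x = rng.normal(size=d)                                 # streaming data point
    y = x @ w_opt + noise_std * rng.normal()               # noisy observation
    w_hat = w_hat + mu_step * (y - x @ w_hat) * x          # constant-step update
    errors.append(np.sum((w_hat - w_opt) ** 2))

print("steady-state tracking MSD ~", np.mean(errors[-1000:]))
```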
arXiv Detail & Related papers (2020-04-04T14:16:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.