Nonlinear MPC for Offset-Free Tracking of systems learned by GRU Neural
Networks
- URL: http://arxiv.org/abs/2103.02383v1
- Date: Wed, 3 Mar 2021 13:14:33 GMT
- Title: Nonlinear MPC for Offset-Free Tracking of systems learned by GRU Neural
Networks
- Authors: Fabio Bonassi, Caio Fabio Oliveira da Silva, Riccardo Scattolini
- Abstract summary: This paper describes how stable Gated Recurrent Units (GRUs) can be trained and employed in an MPC framework to perform offset-free tracking of constant references with guaranteed closed-loop stability.
The proposed approach is tested on a pH neutralization process benchmark, showing remarkable performance.
- Score: 0.2578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of Recurrent Neural Networks (RNNs) for system identification has
recently gathered increasing attention, thanks to their black-box modeling
capabilities. Although RNNs have been fruitfully adopted in many applications,
only a few works are devoted to providing rigorous theoretical foundations that
justify their use for control purposes. The aim of this paper is to describe
how stable Gated Recurrent Units (GRUs), a particular RNN architecture, can be
trained and employed in a Nonlinear MPC framework to perform offset-free
tracking of constant references with guaranteed closed-loop stability. The
proposed approach is tested on a pH neutralization process benchmark, showing
remarkable performance.
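To make the idea concrete, the sketch below shows, in plain NumPy, how a trained GRU can serve as the discrete-time prediction model inside a nonlinear MPC loop: the hidden state is rolled forward over the prediction horizon and a tracking cost is minimized over the candidate input sequence. This is an illustrative sketch under stated assumptions, not the paper's formulation: the weight matrices are random placeholders, the linear output map C, the horizon length, the input bounds, and the crude random-shooting optimizer are all assumptions, and the paper's stability conditions on the GRU weights, the state observer, and the offset-free augmentation are not reproduced here.

```python
# Minimal sketch: a GRU as the state-space prediction model of an NMPC loop.
# All dimensions, weights, and the random-shooting "solver" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, n_y = 4, 1, 1            # hidden-state, input, output dimensions (assumed)

# "Trained" GRU parameters (random placeholders standing in for learned weights)
Wz, Uz, bz = rng.normal(scale=0.3, size=(n_x, n_u)), rng.normal(scale=0.3, size=(n_x, n_x)), np.zeros(n_x)
Wr, Ur, br = rng.normal(scale=0.3, size=(n_x, n_u)), rng.normal(scale=0.3, size=(n_x, n_x)), np.zeros(n_x)
Wh, Uh, bh = rng.normal(scale=0.3, size=(n_x, n_u)), rng.normal(scale=0.3, size=(n_x, n_x)), np.zeros(n_x)
C = rng.normal(scale=0.3, size=(n_y, n_x))   # linear output map y = C x (assumption)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

def gru_step(x, u):
    """One step of the GRU state-space model x+ = f(x, u) (gate convention varies across references)."""
    z = sigmoid(Wz @ u + Uz @ x + bz)          # update gate
    r = sigmoid(Wr @ u + Ur @ x + br)          # reset gate
    x_tilde = np.tanh(Wh @ u + Uh @ (r * x) + bh)
    return z * x + (1.0 - z) * x_tilde         # convex combination of old state and candidate

def simulate(x0, u_seq):
    """Roll the GRU forward over an input sequence and return predicted outputs."""
    x, ys = x0.copy(), []
    for u in u_seq:
        x = gru_step(x, u)
        ys.append(C @ x)
    return np.array(ys)

def nmpc_cost(u_seq, x0, y_ref, q=1.0, r=0.1):
    """Tracking cost over the horizon: output error plus input-increment penalty."""
    y_pred = simulate(x0, u_seq)
    du = np.diff(u_seq, axis=0, prepend=u_seq[:1])
    return q * np.sum((y_pred - y_ref) ** 2) + r * np.sum(du ** 2)

# Crude random-shooting search standing in for a proper NMPC optimizer
N = 10                                         # prediction horizon (assumed)
x0 = np.zeros(n_x)
y_ref = 0.5 * np.ones((N, n_y))                # constant reference to track
best_u, best_J = None, np.inf
for _ in range(2000):
    u_cand = np.clip(rng.normal(scale=1.0, size=(N, n_u)), -2.0, 2.0)   # input constraints
    J = nmpc_cost(u_cand, x0, y_ref)
    if J < best_J:
        best_u, best_J = u_cand, J
print("first move of best input sequence:", best_u[0], "cost:", best_J)
```

In practice, offset-free tracking of constant references is usually obtained by augmenting such a prediction model with integral action on the tracking error or an estimated disturbance, together with a converging state observer; the paper develops these ingredients for GRU models with formal closed-loop stability guarantees, which the sketch above does not attempt to reproduce.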
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Nonlinear MPC design for incrementally ISS systems with application to
GRU networks [0.0]
This brief addresses the design of a Nonlinear Model Predictive Control (NMPC) strategy for exponentially incrementally Input-to-State Stable (ISS) systems.
The designed methodology is particularly suited for the control of systems learned by Recurrent Neural Networks (RNNs).
The approach is applied to Gated Recurrent Unit (GRU) networks, also providing a method for the design of a tailored state observer with convergence guarantees.
arXiv Detail & Related papers (2023-09-28T13:26:20Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Backward Reachability Analysis of Neural Feedback Loops: Techniques for
Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z) - Distributed neural network control with dependability guarantees: a
compositional port-Hamiltonian approach [0.0]
Large-scale cyber-physical systems require that control policies are distributed, that is, that they only rely on local real-time measurements and communication with neighboring agents.
Recent work has proposed training Neural Network (NN) distributed controllers.
A main challenge of NN controllers is that they are not dependable during and after training, that is, the closed-loop system may be unstable, and the training may fail due to vanishing and exploding gradients.
arXiv Detail & Related papers (2021-12-16T17:37:11Z) - Interpretable Design of Reservoir Computing Networks using Realization
Theory [5.607676459156789]
Reservoir computing networks (RCNs) have been successfully employed as a tool in learning and complex decision-making tasks.
We develop an algorithm to design RCNs using the realization theory of linear dynamical systems.
arXiv Detail & Related papers (2021-12-13T18:49:29Z) - Neural network optimal feedback control with enhanced closed loop
stability [3.0981875303080795]
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems.
But the behavior of these neural network (NN) controllers is still not well understood.
In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system.
arXiv Detail & Related papers (2021-09-15T17:59:20Z) - Recurrent neural network-based Internal Model Control of unknown
nonlinear stable systems [0.30458514384586394]
Gated Recurrent Neural Networks (RNNs) have become popular tools for learning dynamical systems.
This paper aims to discuss how these networks can be adopted for the synthesis of Internal Model Control (IMC) architectures.
arXiv Detail & Related papers (2021-08-10T11:02:25Z) - Kernel-Based Smoothness Analysis of Residual Networks [85.20737467304994]
Residual networks (ResNets) stand out among powerful modern architectures.
In this paper, we show another distinction between ResNets and fully-connected networks, namely, a tendency of ResNets to promote smoother interpolations.
arXiv Detail & Related papers (2020-09-21T16:32:04Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)