Recurrent neural network-based Internal Model Control of unknown
nonlinear stable systems
- URL: http://arxiv.org/abs/2108.04585v1
- Date: Tue, 10 Aug 2021 11:02:25 GMT
- Title: Recurrent neural network-based Internal Model Control of unknown
nonlinear stable systems
- Authors: Fabio Bonassi, Riccardo Scattolini
- Abstract summary: Gated Recurrent Neural Networks (RNNs) have become popular tools for learning dynamical systems.
This paper aims to discuss how these networks can be adopted for the synthesis of Internal Model Control (IMC) architectures.
- Score: 0.30458514384586394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Owing to their superior modeling capabilities, gated Recurrent Neural
Networks (RNNs), such as Gated Recurrent Units (GRUs) and Long Short-Term
Memory networks (LSTMs), have become popular tools for learning dynamical
systems. This paper aims to discuss how these networks can be adopted for the
synthesis of Internal Model Control (IMC) architectures. To this end, a first
gated RNN is used to learn a model of the unknown input-output stable plant.
Then, another gated RNN approximating the model inverse is trained. The
proposed scheme is able to cope with the saturation of the control variables,
and it can be deployed on low-power embedded controllers since it does not
require any online computation. The approach is then tested on the Quadruple
Tank benchmark system, resulting in satisfactory closed-loop performance.
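The IMC structure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the two trained gated RNNs (plant model and model inverse) are replaced here by a hypothetical first-order linear plant with a known analytic inverse, so only the wiring of the control loop is shown. All names, gains, and the saturation limit are assumptions for the sketch.

```python
import numpy as np

# Stand-in first-order stable plant: y[k+1] = A*y[k] + B*sat(u[k]).
# In the paper, the plant is unknown and both the model and its
# inverse are learned by gated RNNs; here they are known functions.
A, B = 0.8, 0.5
U_MAX = 2.0  # actuator saturation, which the IMC scheme must respect

def saturate(u):
    """Clip the control input to the actuator range."""
    return float(np.clip(u, -U_MAX, U_MAX))

def plant_step(y, u):
    """The 'unknown' process (role of the real plant)."""
    return A * y + B * saturate(u)

def model_step(ym, u):
    """Internal model (role of the first gated RNN)."""
    return A * ym + B * saturate(u)

def model_inverse(v, ym):
    """Approximate model inverse (role of the second gated RNN):
    pick u so the model output reaches v at the next step."""
    return (v - A * ym) / B

def imc_loop(reference, steps=50):
    """Classic IMC loop: feed back the plant/model mismatch,
    correct the setpoint, invert, and saturate."""
    y, ym, log = 0.0, 0.0, []
    for _ in range(steps):
        e = y - ym                       # model mismatch signal
        v = reference - e                # corrected setpoint
        u = saturate(model_inverse(v, ym))
        y, ym = plant_step(y, u), model_step(ym, u)
        log.append(y)
    return log

out = imc_loop(1.0)
print(round(out[-1], 4))  # with a perfect model, the output settles at 1.0
```

Because the internal model is exact here, the mismatch signal stays at zero and the loop reduces to feedforward inversion; with a learned RNN model, the feedback term compensates for modeling error, which is the point of the IMC architecture.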
Related papers
- Self-Organizing Recurrent Stochastic Configuration Networks for Nonstationary Data Modelling [3.8719670789415925]
Recurrent stochastic configuration networks (RSCNs) are a class of randomized models that have shown promise in modelling nonlinear dynamics.
This paper aims at developing a self-organizing version of RSCNs, termed as SORSCNs, to enhance the continuous learning ability of the network for modelling nonstationary data.
arXiv Detail & Related papers (2024-10-14T01:28:25Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Brain-Inspired Spiking Neural Network for Online Unsupervised Time Series Prediction [13.521272923545409]
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding.
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
arXiv Detail & Related papers (2023-04-10T16:18:37Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- A critical look at deep neural network for dynamic system modeling [0.0]
This paper questions the capability of (deep) neural networks for the modeling of dynamic systems using input-output data.
For the identification of linear time-invariant (LTI) dynamic systems, two representative neural network models are compared.
For the LTI system, both LSTM and CFNN fail to deliver consistent models even in noise-free cases.
arXiv Detail & Related papers (2023-01-27T09:03:05Z)
- Model-Based Safe Policy Search from Signal Temporal Logic Specifications Using Recurrent Neural Networks [1.005130974691351]
We propose a policy search approach to learn controllers from specifications given as Signal Temporal Logic (STL) formulae.
The system model is unknown, and it is learned together with the control policy.
The results show that our approach can satisfy the given specification within very few system runs, and therefore it has the potential to be used for on-line control.
arXiv Detail & Related papers (2021-03-29T20:21:55Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.