On Recurrent Neural Networks for learning-based control: recent results
and ideas for future developments
- URL: http://arxiv.org/abs/2111.13557v1
- Date: Fri, 26 Nov 2021 15:52:52 GMT
- Title: On Recurrent Neural Networks for learning-based control: recent results
and ideas for future developments
- Authors: Fabio Bonassi, Marcello Farina, Jing Xie, Riccardo Scattolini
- Abstract summary: This paper aims to discuss and analyze the potential of Recurrent Neural Networks (RNN) in control design.
The main families of RNN are considered, namely Neural Nonlinear AutoRegressive eXogenous (NNARX) networks, Echo State Networks (ESN), Long Short Term Memory (LSTM) networks, and Gated Recurrent Units (GRU).
- Score: 1.1031750359996124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to discuss and analyze the potential of Recurrent Neural
Networks (RNN) in control design applications. The main families of RNN are
considered, namely Neural Nonlinear AutoRegressive eXogenous (NNARX) networks, Echo
State Networks (ESN), Long Short Term Memory (LSTM) networks, and Gated Recurrent Units
(GRU). The goal is twofold. Firstly, to survey recent results concerning the
training of RNN that enjoy Input-to-State Stability (ISS) and Incremental
Input-to-State Stability (δISS) guarantees. Secondly, to discuss the
issues that still hinder the widespread use of RNN for control, namely their
robustness, verifiability, and interpretability. The first two properties are
related to the so-called generalization capabilities of the networks, i.e.
their consistency with the underlying real plants even in the presence of unseen
or perturbed input trajectories. The last is instead related to the
possibility of providing a clear formal connection between the RNN model and
the plant. In this context, we illustrate how ISS and δISS represent a
significant step towards the robustness and verifiability of RNN models,
while the requirement of interpretability paves the way to the use of
physics-based networks. The design of model predictive controllers with an RNN as
the plant model is also briefly discussed. Lastly, some of the main topics of the
paper are illustrated on a simulated chemical system.
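For reference, the two stability notions at the heart of the survey can be stated for a generic discrete-time system x_{k+1} = f(x_k, u_k). The block below is a hedged restatement of the standard definitions from the ISS literature (β a KL-function, γ a K∞-function); the paper's own statements may differ in detail.

```latex
% Standard ISS / deltaISS definitions for x_{k+1} = f(x_k, u_k),
% with beta a KL-function and gamma a K-infinity function.
\begin{align*}
  \text{ISS:}\quad
    & \|x_k\| \le \beta(\|x_0\|, k)
      + \gamma\Big(\sup_{0 \le j < k} \|u_j\|\Big)
      \quad \forall k \ge 0, \\
  \delta\text{ISS:}\quad
    & \|x_k^a - x_k^b\| \le \beta\big(\|x_0^a - x_0^b\|, k\big)
      + \gamma\Big(\sup_{0 \le j < k} \|u_j^a - u_j^b\|\Big)
      \quad \forall k \ge 0.
\end{align*}
```

Because β(·, k) vanishes as k grows, δISS bounds the asymptotic divergence of any two state trajectories by the distance between their input sequences, which is what ties it to the generalization and robustness claims above.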
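The abstract also mentions the design of model predictive controllers with an RNN as the plant model. The snippet below is a minimal sketch of that idea under illustrative assumptions: a hand-specified one-layer recurrent model stands in for a trained network, and the horizon, cost weights, and input bounds are placeholders, not the paper's actual design.

```python
# Minimal receding-horizon MPC sketch with an RNN surrogate model.
# The model x+ = tanh(A x + B u), y = C x and all constants below are
# illustrative placeholders, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.4, 0.1],
              [0.0, 0.5]])        # contractive recurrence (spectral radius < 1)
B = np.array([[0.3], [0.2]])      # input weights
C = np.array([[1.0, 0.0]])        # output map

def rnn_step(x, u):
    """One step of the recurrent surrogate model."""
    return np.tanh(A @ x + B @ np.atleast_1d(u))

def predict(x0, u_seq):
    """Roll the model forward over the control horizon."""
    x, ys = x0, []
    for u in u_seq:
        x = rnn_step(x, u)
        ys.append((C @ x).item())
    return np.array(ys)

def mpc_cost(u_seq, x0, y_ref, r=0.01):
    """Output-tracking cost plus a small input-effort penalty."""
    return np.sum((predict(x0, u_seq) - y_ref) ** 2) + r * np.sum(u_seq ** 2)

N = 10                            # prediction horizon
x = np.zeros(2)                   # current model state
y_ref = np.ones(N)                # constant setpoint
u_guess = np.zeros(N)
for k in range(20):
    res = minimize(mpc_cost, u_guess, args=(x, y_ref),
                   bounds=[(-2.0, 2.0)] * N)   # input constraints
    u0 = res.x[0]                 # receding horizon: apply only the first input
    x = rnn_step(x, u0)           # here the model doubles as the simulated plant
    u_guess = np.roll(res.x, -1)  # warm-start the next solve
    print(f"step {k:2d}: u = {u0:+.3f}, y = {(C @ x).item():+.3f}")
```

In the setting surveyed above, a δISS-enforcing training procedure is what justifies trusting such a model inside the optimizer, since it keeps predictions consistent under perturbed inputs.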
Related papers
- Designing Robust Quantum Neural Networks: Exploring Expressibility, Entanglement, and Control Rotation Gate Selection for Enhanced Quantum Models [3.9554540293311864]
This study investigates the robustness of Quanvolutional Neural Networks (QuNNs) in comparison to their classical counterparts.
We develop a novel methodology that utilizes three quantum circuit metrics: expressibility, entanglement capability, and controlled rotation gate selection.
Our results demonstrate that QuNNs exhibit up to 60% greater robustness on the MNIST dataset and 40% on the Fashion-MNIST dataset compared to CNNs.
arXiv Detail & Related papers (2024-11-03T21:18:07Z) - Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based constraint DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including α,β-CROWN and MN-BaB, the first- and second-place performers in VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z) - Brain-Inspired Spiking Neural Networks for Industrial Fault Diagnosis: A Survey, Challenges, and Opportunities [10.371337760495521]
Spiking Neural Networks (SNN) are founded on the principles of brain-inspired computing.
This paper systematically reviews the theoretical progress of SNN-based models to answer the question of what an SNN is.
arXiv Detail & Related papers (2023-11-13T11:25:34Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural
Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2022-05-31T17:02:26Z) - Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and
(gradient) stable architecture for learning long time dependencies [15.2292571922932]
We propose a novel architecture for recurrent neural networks.
Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations (a toy sketch of this construction appears after this list).
Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks.
arXiv Detail & Related papers (2020-10-02T12:35:04Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task if the topological randomness is not properly accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
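As referenced in the coRNN entry above, that architecture is obtained by discretizing a system of second-order ODEs. The toy sketch below uses a simple explicit-Euler step of y'' = tanh(W y + W' y' + V u + b) − γ y − ε y' for illustration; the paper's own discretization scheme and parameter choices may differ.

```python
# Toy sketch of an oscillatory RNN cell: explicit-Euler discretization of
# y'' = tanh(W y + Wz y' + V u + b) - gamma*y - eps*y'.
# Sizes, weights, and constants are illustrative, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
H, D = 8, 3                          # hidden size, input size
W  = rng.normal(0.0, 0.5, (H, H))    # position-to-acceleration weights
Wz = rng.normal(0.0, 0.5, (H, H))    # velocity-to-acceleration weights
V  = rng.normal(0.0, 0.5, (H, D))    # input weights
b  = np.zeros(H)
dt, gamma, eps = 0.05, 1.0, 0.1      # step size, stiffness, damping

def cornn_step(y, z, u):
    """One explicit-Euler step: z is the velocity, y the hidden state."""
    accel = np.tanh(W @ y + Wz @ z + V @ u + b) - gamma * y - eps * z
    z_new = z + dt * accel
    y_new = y + dt * z_new
    return y_new, z_new

# Roll the cell over a random input sequence.
y, z = np.zeros(H), np.zeros(H)
for u in rng.normal(size=(20, D)):
    y, z = cornn_step(y, z, u)
print("final hidden state:", np.round(y, 3))
```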
This list is automatically generated from the titles and abstracts of the papers in this site.