Robust Recurrent Neural Network to Identify Ship Motion in Open Water
with Performance Guarantees -- Technical Report
- URL: http://arxiv.org/abs/2212.05781v1
- Date: Mon, 12 Dec 2022 09:07:37 GMT
- Title: Robust Recurrent Neural Network to Identify Ship Motion in Open Water
with Performance Guarantees -- Technical Report
- Authors: Daniel Frank, Decky Aspandi Latif, Michael Muehlebach, Steffen Staab
- Abstract summary: Recurrent neural networks are capable of learning the dynamics of an unknown nonlinear system purely from input-output measurements.
In this work, we represent a recurrent neural network as a linear time-invariant system with nonlinear disturbances.
- Score: 8.441687388985162
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent neural networks are capable of learning the dynamics of an unknown
nonlinear system purely from input-output measurements. However, the resulting
models do not provide any stability guarantees on the input-output mapping. In
this work, we represent a recurrent neural network as a linear time-invariant
system with nonlinear disturbances. By introducing constraints on the
parameters, we can guarantee finite gain stability and incremental finite gain
stability. We apply this identification method to learn the motion of a
four-degrees-of-freedom ship that is moving in open water and compare it
against other purely learning-based approaches with unconstrained parameters.
Our analysis shows that the constrained recurrent neural network has a lower
prediction accuracy on the test set, but it achieves comparable results on an
out-of-distribution set and respects stability conditions.
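To make the construction concrete, the following is a minimal sketch, assuming a plain tanh recurrence: the recurrent update is rewritten as a linear time-invariant system whose nonlinearity enters as a feedback disturbance. The matrices A, B1, B2, C1, C2 and the function project_stable are illustrative names, and the crude spectral-norm projection is only a conservative stand-in for the paper's actual parameter constraints.
```python
# Sketch: an RNN written as an LTI system in feedback with a nonlinear
# "disturbance" (the tanh block). Matrix names and the projection rule
# are illustrative, not the paper's exact constraints.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, n_y = 8, 3, 2  # state, input, output dimensions

A = rng.normal(scale=0.3, size=(n_x, n_x))    # LTI state matrix
B1 = rng.normal(scale=0.3, size=(n_x, n_x))   # disturbance-to-state gain
B2 = rng.normal(scale=0.3, size=(n_x, n_u))   # input-to-state gain
C1 = rng.normal(scale=0.3, size=(n_x, n_x))   # state-to-nonlinearity map
C2 = rng.normal(scale=0.3, size=(n_y, n_x))   # output map

def project_stable(A, B1, C1, margin=0.99):
    """Rescale (A, B1) so the tanh feedback loop is a contraction.

    Because tanh is 1-Lipschitz, ||A||_2 + ||B1||_2 * ||C1||_2 < 1 is a
    (very conservative) sufficient condition for incremental stability;
    the paper derives sharper constraints.
    """
    gain = np.linalg.norm(A, 2) + np.linalg.norm(B1, 2) * np.linalg.norm(C1, 2)
    s = min(1.0, margin / gain)
    return A * s, B1 * s

A, B1 = project_stable(A, B1, C1)

def step(x, u):
    w = np.tanh(C1 @ x)                # nonlinearity, treated as a disturbance
    x_next = A @ x + B1 @ w + B2 @ u   # linear time-invariant update
    return x_next, C2 @ x

x = np.zeros(n_x)
for _ in range(50):                    # roll out on a random input sequence
    x, y = step(x, rng.normal(size=n_u))
```
In constrained training, such a projection would be re-applied after every gradient update (projected gradient descent); the paper's conditions are sharper, but the overall pattern of constraining parameters to certify stability is the same.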
Related papers
- Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z)
- Synthesizing Neural Network Controllers with Closed-Loop Dissipativity Guarantees [0.6612847014373572]
The class of plants considered is that of linear time-invariant (LTI) systems interconnected with an uncertainty.
The uncertainty of the plant and the nonlinearities of the neural network are both described using integral quadratic constraints.
A convex condition is used in a projection-based training method to synthesize neural network controllers with dissipativity guarantees.
arXiv Detail & Related papers (2024-04-10T22:15:28Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution (a minimal sketch of this estimator appears after this list).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point (an illustrative local-versus-global comparison appears after this list).
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Stability Verification in Stochastic Control Systems via Neural Network Supermartingales [17.558766911646263]
We present an approach for general nonlinear control problems with two novel aspects.
We use ranking supermartingales (RSMs) to certify almost-sure (a.s.) asymptotic stability, and we present a method for learning neural network RSMs (the defining RSM inequality is recalled after this list).
arXiv Detail & Related papers (2021-12-17T13:05:14Z)
- A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks (the standard contraction condition behind such metrics is recalled after this list).
arXiv Detail & Related papers (2021-10-02T00:28:49Z)
- Concurrent Learning Based Tracking Control of Nonlinear Systems using Gaussian Process [2.7930955543692817]
This paper demonstrates the applicability of combining concurrent learning, as a tool for parameter estimation, with a non-parametric Gaussian process for online disturbance learning.
A control law is developed by using both techniques sequentially in the context of feedback linearization.
Closed-loop stability for the n-th order system is proven using the Lyapunov stability theorem.
arXiv Detail & Related papers (2021-06-02T02:59:48Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Linear systems with neural network nonlinearities: Improved stability analysis via acausal Zames-Falb multipliers [0.0]
We analyze the stability of feedback interconnections of a linear time-invariant system with a neural network nonlinearity in discrete time.
Our approach provides a flexible and versatile framework for stability analysis of feedback interconnections with neural network nonlinearities.
arXiv Detail & Related papers (2021-03-31T14:21:03Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
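For the NUQ entry above, a minimal sketch of a Nadaraya-Watson estimate of the conditional label distribution. The Gaussian kernel, the bandwidth, and the use of total kernel mass as a confidence signal are assumptions made for illustration, not NUQ's exact construction.
```python
# Nadaraya-Watson estimate of p(y | x) for a classifier (illustrative;
# kernel, bandwidth, and the kernel-mass heuristic are assumptions).
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, bandwidth=1.0):
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to x
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    probs = np.array([k[y_train == c].sum() for c in range(n_classes)])
    mass = probs.sum() + 1e-12
    # Little kernel mass near x means few relevant neighbours, i.e. the
    # estimate itself is unreliable there -- one way to read off uncertainty.
    return probs / mass, mass

X = np.array([[0.0], [0.1], [2.0]])
y = np.array([0, 0, 1])
p, mass = nw_label_distribution(np.array([0.05]), X, y, n_classes=2)
print(p, mass)
```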
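For the a-priori denoising entry, an illustrative local-versus-global comparison on a toy signal: a Savitzky-Golay filter stands in for the local methods (a sliding-window polynomial fit) and a smoothing spline for the global ones (fit to the entire record at once). These particular methods are assumptions, not necessarily the ones compared in the paper.
```python
# Local vs. global a-priori denoising of a noisy measurement record.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import savgol_filter

t = np.linspace(0.0, 10.0, 500)
clean = np.sin(t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)

local_fit = savgol_filter(noisy, window_length=31, polyorder=3)  # local
global_fit = UnivariateSpline(t, noisy, s=t.size * 0.1 ** 2)(t)  # global

for name, fit in [("local (Savitzky-Golay)", local_fit),
                  ("global (smoothing spline)", global_fit)]:
    print(f"{name}: RMSE = {np.sqrt(np.mean((fit - clean) ** 2)):.4f}")
```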
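For the supermartingale entry, the defining inequality of a ranking supermartingale in its standard form; the paper's precise definition may differ in detail:
```latex
% Ranking supermartingale (RSM): a nonnegative function V that strictly
% decreases in expectation outside the target set T.
\[
  \mathbb{E}\left[ V(x_{t+1}) \mid x_t = x \right] \le V(x) - \varepsilon
  \quad \text{for all } x \notin \mathcal{T}, \text{ some fixed } \varepsilon > 0 .
\]
% Existence of such a V certifies that T is reached almost surely, which
% is the route to the a.s. asymptotic stability certificate.
```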
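For the contraction-metric entry, the standard differential condition from contraction theory that such a metric certifies; this is the generic textbook form, not quoted from the paper:
```latex
% Contraction metric M(x,t) > 0 for dx/dt = f(x,t) with rate lambda > 0:
\[
  \dot{M} + M \frac{\partial f}{\partial x}
          + \left(\frac{\partial f}{\partial x}\right)^{\top} M
  \preceq -2\lambda M .
\]
% Any two trajectories then converge to each other exponentially fast,
% which is the incremental-stability property the neural network model
% is trained to provide.
```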
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.