Offset-free setpoint tracking using neural network controllers
- URL: http://arxiv.org/abs/2011.14006v2
- Date: Thu, 29 Apr 2021 17:10:14 GMT
- Title: Offset-free setpoint tracking using neural network controllers
- Authors: Patricia Pauli, Johannes Köhler, Julian Berberich, Anne Koch and Frank Allgöwer
- Abstract summary: We present a method to analyze local and global stability in offset-free setpoint tracking using neural network controllers.
We provide ellipsoidal inner approximations of the corresponding region of attraction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a method to analyze local and global stability in
offset-free setpoint tracking using neural network controllers and we provide
ellipsoidal inner approximations of the corresponding region of attraction. We
consider a feedback interconnection of a linear plant with a neural network
controller and an integrator, which allows for offset-free
tracking of a desired piecewise constant reference that enters the controller
as an external input. Exploiting the fact that activation functions used in
neural networks are slope-restricted, we derive linear matrix inequalities to
verify stability using Lyapunov theory. After stating a global stability
result, we present less conservative local stability conditions (i) for a given
reference and (ii) for any reference from a certain set. The latter result even
enables guaranteed tracking under setpoint changes using a reference governor
which can lead to a significant increase of the region of attraction. Finally,
we demonstrate the applicability of our analysis by verifying stability and
offset-free tracking of a neural network controller that was trained to
stabilize a linearized inverted pendulum.
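The analysis described in the abstract can be illustrated with a small numerical sketch: a Schur-stable linear plant in feedback with a one-hidden-layer tanh controller, where stability is certified by checking that a candidate Lyapunov matrix and a sector multiplier satisfy the matrix inequality arising from the fact that tanh is slope-restricted in the sector [0, 1]. All matrices below are illustrative toy values, not from the paper, and in the paper P and Λ would be decision variables found by an SDP solver rather than fixed candidates checked numerically.

```python
import numpy as np

# Plant x+ = A x + B u (Schur-stable toy system; illustrative values)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])

# Controller u = W2 * tanh(W1 x) (hypothetical trained weights)
W1 = np.array([[0.5, -0.3],
               [0.2,  0.4]])
W2 = np.array([[0.1, -0.1]])

# Closed loop: x+ = A x + Bw q, with v = W1 x and q = tanh(v)
Bw = B @ W2

# Candidate Lyapunov matrix: solve A' P A - P = -Q by fixed-point iteration
# (converges because A is Schur stable)
Q = np.eye(2)
P = Q.copy()
for _ in range(500):
    P = Q + A.T @ P @ A

# Diagonal sector multiplier Lambda >= 0 encoding the slope restriction
# q * (v - q) >= 0 elementwise, valid since tanh lies in the sector [0, 1]
lam = 0.5
Lam = lam * np.eye(2)

# S-procedure certificate: the loop is globally stable if M is negative
# definite, where
# M = [[A'PA - P,         A'P Bw + W1' Lam],
#      [(..)',            Bw'P Bw - 2 Lam ]]
M = np.block([[A.T @ P @ A - P,         A.T @ P @ Bw + W1.T @ Lam],
              [Bw.T @ P @ A + Lam @ W1, Bw.T @ P @ Bw - 2.0 * Lam]])

print("P positive definite:", np.linalg.eigvalsh(P).min() > 0)  # True
print("LMI satisfied:      ", np.linalg.eigvalsh(M).max() < 0)  # True
```

For these toy values both checks pass, certifying global exponential stability of the interconnection; the paper's local conditions and region-of-attraction estimates refine this idea for a given reference or a set of references.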
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Structured Deep Neural Network-Based Backstepping Trajectory Tracking Control for Lagrangian Systems [9.61674297336072]
The proposed controller can ensure closed-loop stability for any compatible neural network parameters.
We show that in the presence of model approximation errors and external disturbances, the closed-loop stability and tracking control performance can still be guaranteed.
arXiv Detail & Related papers (2024-03-01T09:09:37Z) - Training Certifiably Robust Neural Networks with Efficient Local
Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z) - A Theoretical Overview of Neural Contraction Metrics for Learning-based
Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
arXiv Detail & Related papers (2021-10-02T00:28:49Z) - Neural network optimal feedback control with enhanced closed loop
stability [3.0981875303080795]
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems.
But the behavior of these neural network (NN) controllers is still not well understood.
In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system.
arXiv Detail & Related papers (2021-09-15T17:59:20Z) - Robust Stability of Neural-Network Controlled Nonlinear Systems with
Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
For computing such a robust stabilizing NN controller, a stability-guaranteed training (SGT) algorithm is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z) - Certifying Incremental Quadratic Constraints for Neural Networks via
Convex Optimization [2.388501293246858]
We propose a convex program to certify incremental quadratic constraints on the map of neural networks over a region of interest.
These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction.
arXiv Detail & Related papers (2020-12-10T21:15:00Z) - Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Transition control of a tail-sitter UAV using recurrent neural networks [80.91076033926224]
The control strategy is based on attitude and velocity stabilization.
The RNN is used for the estimation of highly nonlinear aerodynamic terms.
Results show convergence of linear velocities and the pitch angle during the transition maneuver.
arXiv Detail & Related papers (2020-06-29T21:33:30Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can tackle the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.