Discrete-Time Nonlinear Feedback Linearization via Physics-Informed
Machine Learning
- URL: http://arxiv.org/abs/2303.08884v1
- Date: Wed, 15 Mar 2023 19:03:23 GMT
- Title: Discrete-Time Nonlinear Feedback Linearization via Physics-Informed
Machine Learning
- Authors: Hector Vargas Alvarez, Gianluca Fabiani, Nikolaos Kazantzis,
Constantinos Siettos, Ioannis G. Kevrekidis
- Abstract summary: We present a physics-informed machine learning scheme for the feedback linearization of nonlinear systems.
We show that the proposed PIML outperforms the traditional numerical implementation in approximation accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a physics-informed machine learning (PIML) scheme for the feedback
linearization of nonlinear discrete-time dynamical systems. The PIML finds the
nonlinear transformation law, thus ensuring stability via pole placement, in
one step. To facilitate convergence in the presence of steep gradients in the
nonlinear transformation law, we adopt a greedy-wise training procedure. We
assess the performance of the proposed PIML approach via a
benchmark nonlinear discrete map for which the feedback linearization
transformation law can be derived analytically; the example is characterized by
steep gradients, due to the presence of singularities, in the domain of
interest. We show that, in terms of numerical approximation accuracy, the
proposed PIML outperforms both the traditional numerical implementation, which
involves constructing a system of homological equations and solving it for the
coefficients of a power-series expansion, and a PIML trained over the entire
domain at once, thus highlighting the importance of continuation techniques in
the PIML training procedure.
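As a rough illustration of the functional-equation viewpoint (a minimal sketch, not the authors' implementation: the map, the polynomial basis, the pole, and all names below are assumptions; the paper trains a neural network on a controlled benchmark map with pole placement), one can parameterize the transformation and minimize the residual of the linearization equation at collocation points, enlarging the training domain in stages in the spirit of the greedy/continuation procedure:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative autonomous map; the paper treats a controlled benchmark
# map and places the closed-loop pole via the feedback transformation.
def f(x):
    return 0.5 * x + x ** 2

# Parameterize z = T(x) as x plus higher-order terms (so T'(0) = 1),
# mirroring the power-series ansatz behind the homological equations.
def T(c, x):
    return x + sum(ci * x ** (i + 2) for i, ci in enumerate(c))

lam = 0.5  # linear part of f; in the controlled setting this pole is placed

# Physics-informed residual of the linearization (conjugacy) equation
# T(f(x)) - lam * T(x) = 0, evaluated at collocation points xs.
def residual(c, xs):
    return T(c, f(xs)) - lam * T(c, xs)

# Greedy-wise / continuation flavor: enlarge the training domain in
# stages, warm-starting each solve from the previous coefficients.
c = np.zeros(8)
for half_width in (0.1, 0.2, 0.4):
    xs = np.linspace(-half_width, half_width, 400)
    c = least_squares(residual, c, args=(xs,)).x

print("learned series coefficients:", np.round(c, 4))
```

The staged solves warm-start one another, which is the continuation idea the abstract credits for coping with steep gradients; swapping the fixed polynomial basis for a neural network and adding the control transformation would move this toward the paper's actual setting.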
Related papers
- Nonlinear Discrete-Time Observers with Physics-Informed Neural Networks [0.0]
We use Physics-Informed Neural Networks (PINNs) to solve the discrete-time nonlinear observer state estimation problem.
The proposed PINN approach aims at learning a nonlinear state transformation map by solving a system of inhomogeneous functional equations.
arXiv Detail & Related papers (2024-02-19T18:47:56Z)
- Estimation Sample Complexity of a Class of Nonlinear Continuous-time Systems [0.0]
We present a method of parameter estimation for a large class of nonlinear systems, namely those in which the state consists of output derivatives and the flow is linear in the parameter.
The method, which solves for the unknown parameter by directly inverting the dynamics using regularized linear regression, is based on new design and analysis ideas for differentiation filtering and regularized least squares.
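A toy illustration of this direct-inversion idea (a minimal sketch under assumed names: scalar system, single parameter, finite differences standing in for the paper's differentiation-filter design):

```python
import numpy as np

# Hypothetical scalar flow, linear in the unknown parameter theta:
# xdot = theta * phi(x), sampled on a uniform time grid.
theta_true = -1.5
phi = lambda x: np.sin(x)

ts = np.linspace(0.0, 5.0, 501)
dt = ts[1] - ts[0]
x = np.empty_like(ts)
x[0] = 1.0
for k in range(len(ts) - 1):  # forward-Euler simulation of the flow
    x[k + 1] = x[k] + dt * theta_true * phi(x[k])

# Stand-in for differentiation filtering: finite-difference derivatives.
xdot = np.gradient(x, dt)

# Regularized least squares, directly inverting the dynamics:
# minimize ||xdot - theta * phi(x)||^2 + reg * theta^2.
reg = 1e-6
a = phi(x)
theta_hat = (a @ xdot) / (a @ a + reg)
print(f"estimated theta = {theta_hat:.4f} (true value {theta_true})")
```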
arXiv Detail & Related papers (2023-12-08T21:42:11Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- The Power of Learned Locally Linear Models for Nonlinear Policy Optimization [26.45568696453259]
This paper conducts a rigorous analysis of a simplified variant of this strategy for general nonlinear systems.
We analyze an algorithm which iterates between estimating local linear models of the nonlinear system dynamics and performing $\mathtt{iLQR}$-like policy updates.
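A minimal sketch of one such iteration (hypothetical dynamics; plain LQR via a Riccati iteration stands in for the $\mathtt{iLQR}$-like update analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear dynamics (the paper's analysis covers general
# nonlinear systems).
def step(x, u):
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (np.sin(x[0]) + u[0])])

n, m = 2, 1
K = np.zeros((m, n))                     # current linear policy u = K x

for _ in range(5):
    # 1) Fit a local linear model x' ~ A x + B u from a noisy rollout
    #    of the current policy.
    X, U, Xn = [], [], []
    x = np.array([0.5, 0.0])
    for _ in range(200):
        u = K @ x + 0.1 * rng.standard_normal(m)
        X.append(x); U.append(u); Xn.append(step(x, u))
        x = Xn[-1]
    Z = np.hstack([np.array(X), np.array(U)])        # regressors [x, u]
    W, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
    A, B = W[:n].T, W[n:].T

    # 2) LQR-like policy update on the fitted model (Riccati iteration).
    Q, R, P = np.eye(n), np.eye(m), np.eye(n)
    for _ in range(100):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + K.T @ R @ K + (A + B @ K).T @ P @ (A + B @ K)

print("final policy gain K:", K)
```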
arXiv Detail & Related papers (2023-05-16T17:13:00Z)
- Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies [115.86431674214282]
We consider infinite-horizon discounted Markov decision processes and study the convergence rates of the natural policy gradient (NPG) and the Q-NPG methods with the log-linear policy class.
We show that both methods attain linear convergence rates and $\mathcal{O}(1/\epsilon^2)$ sample complexities using a simple, non-adaptive geometrically increasing step size.
arXiv Detail & Related papers (2022-10-04T06:17:52Z)
- Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z)
- Online Stochastic Gradient Descent Learns Linear Dynamical Systems from A Single Trajectory [1.52292571922932]
We show that if the unknown weight matrices describing the system are in Brunovsky canonical form, we can efficiently estimate the ground-truth weight matrices of the system.
Specifically, by deriving concrete bounds, we show that SGD converges linearly in expectation to within an arbitrarily small Frobenius-norm distance of the ground-truth weights.
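A toy single-trajectory illustration (a diagonal stable ground truth is assumed here for simplicity; the paper works with Brunovsky canonical form and gives concrete bounds):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 1

# Hypothetical stable ground-truth system x_{t+1} = A x_t + B u_t.
A_true = np.diag([0.8, 0.5, 0.2])
B_true = 0.5 * rng.standard_normal((n, m))
A_hat, B_hat = np.zeros((n, n)), np.zeros((n, m))

x = np.zeros(n)
lr = 0.05
for t in range(20000):  # one input-driven trajectory, processed online
    u = rng.standard_normal(m)
    x_next = A_true @ x + B_true @ u
    err = A_hat @ x + B_hat @ u - x_next   # one-step prediction error
    A_hat -= lr * np.outer(err, x)         # SGD step on 0.5 * ||err||^2
    B_hat -= lr * np.outer(err, u)
    x = x_next

print("Frobenius error in A:", np.linalg.norm(A_hat - A_true))
```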
arXiv Detail & Related papers (2021-02-23T17:48:39Z)
- Fundamental limits and algorithms for sparse linear regression with sublinear sparsity [16.3460693863947]
We establish exact expressions for the normalized mutual information and minimum mean-square-error (MMSE) of sparse linear regression.
We show how to modify existing well-known approximate message passing (AMP) algorithms, designed for the linear-sparsity regime, to handle sublinear sparsity.
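For orientation, a compact sketch of the standard linear-regime AMP iteration with a soft-thresholding denoiser, i.e., the baseline the paper modifies for sublinear sparsity (sizes, threshold, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, k = 400, 200, 20                 # signal size, measurements, nonzeros

A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(M)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(N), y.copy()
for _ in range(30):
    tau = np.sqrt(z @ z / M)           # effective-noise level estimate
    x_new = soft(x + A.T @ z, 1.5 * tau)
    # Onsager correction term, using the denoiser's average derivative.
    z = y - A @ x_new + (z / M) * np.count_nonzero(x_new)
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```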
arXiv Detail & Related papers (2021-01-27T01:27:03Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time-series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)