Nonlinear Discrete-Time Observers with Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2402.12360v1
- Date: Mon, 19 Feb 2024 18:47:56 GMT
- Title: Nonlinear Discrete-Time Observers with Physics-Informed Neural Networks
- Authors: Hector Vargas Alvarez, Gianluca Fabiani, Ioannis G. Kevrekidis,
Nikolaos Kazantzis, Constantinos Siettos
- Abstract summary: We use Physics-Informed Neural Networks (PINNs) to solve the discrete-time nonlinear observer state estimation problem.
The proposed PINN approach aims at learning a nonlinear state transformation map by solving a system of inhomogeneous functional equations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We use Physics-Informed Neural Networks (PINNs) to solve the discrete-time
nonlinear observer state estimation problem. Integrated within a single-step
exact observer linearization framework, the proposed PINN approach aims at
learning a nonlinear state transformation map by solving a system of
inhomogeneous functional equations. The performance of the proposed PINN
approach is assessed via two illustrative case studies for which the observer
linearizing transformation map can be derived analytically. We also perform an
uncertainty quantification analysis for the proposed PINN scheme and compare it
with conventional numerical implementations that rely on the computation of a
power-series solution.
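The core object here is a transformation map T solving an inhomogeneous functional equation of the form T(f(x)) = λT(x) + h(x) for discrete-time dynamics x⁺ = f(x) with output y = h(x). As a hedged illustration (not the authors' PINN implementation), the sketch below solves such an equation by least-squares collocation over a polynomial basis; the dynamics f, output h, and eigenvalue λ are toy assumptions.

```python
import numpy as np

# Toy discrete-time system x+ = f(x) with output y = h(x) (assumed for illustration).
f = lambda x: 0.5 * x + 0.1 * x**2   # toy nonlinear dynamics
h = lambda x: x                      # toy output map
lam = 0.3                            # target linear observer eigenvalue (|lam| < 1)

# Seek T(x) = sum_k c_k x^k satisfying T(f(x)) = lam * T(x) + h(x).
# The equation is linear in the coefficients c_k, so collocation at sample
# points reduces it to a least-squares problem -- a polynomial stand-in for
# the neural parameterization used in the paper.
deg = 8
x = np.linspace(-0.3, 0.3, 200)
basis = lambda z: np.stack([z**k for k in range(1, deg + 1)], axis=1)
A = basis(f(x)) - lam * basis(x)     # residual operator applied to each basis function
c, *_ = np.linalg.lstsq(A, h(x), rcond=None)

T = lambda z: basis(np.atleast_1d(z)) @ c
residual = np.max(np.abs(T(f(x)) - lam * T(x) - h(x)))
print("max residual:", residual)     # small => functional equation satisfied
print("leading coefficient:", c[0])  # matching x^1 terms by hand gives c1 = 1/(0.5 - lam) = 5
```

A PINN replaces the polynomial basis with a neural network and the least-squares solve with gradient descent on the same residual.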
Related papers
- Learning solutions of parametric Navier-Stokes with physics-informed neural networks [0.3989223013441816]
We leverage Physics-Informed Neural Networks (PINNs) to learn solution functions of the parametric Navier-Stokes equations (NSE).
We treat the parameter(s) of interest as inputs to the PINNs alongside the coordinates, and train the PINNs on numerical solutions of the parametric PDEs for instances of the parameters.
We show that the proposed approach yields PINN models that learn the solution functions while keeping flow predictions consistent with the conservation laws of mass and momentum.
arXiv Detail & Related papers (2024-02-05T16:19:53Z)
- Error estimation for physics-informed neural networks with implicit Runge-Kutta methods [0.0]
In this work, we propose to use the NN's predictions in a high-order implicit Runge-Kutta (IRK) method.
The residuals in the implicit system of equations can be related to the NN's prediction error, hence, we can provide an error estimate at several points along a trajectory.
We find that this error estimate highly correlates with the NN's prediction error and that increasing the order of the IRK method improves this estimate.
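A minimal version of this idea, using the implicit midpoint rule (the simplest IRK scheme) and hand-made "predictions" in place of a trained network: substitute the candidate solution into the implicit stage equations and read off the residual as an error proxy. The ODE, step size, and perturbation below are illustrative assumptions.

```python
import numpy as np

# Test ODE dy/dt = -y with exact solution y(t) = exp(-t).
rhs = lambda y: -y

def midpoint_residual(y_fn, t, dt):
    """Residual of the implicit midpoint rule (a 1-stage IRK method)
    when a candidate solution y_fn is substituted for the true one."""
    y0, y1 = y_fn(t), y_fn(t + dt)
    return abs(y1 - y0 - dt * rhs(0.5 * (y0 + y1)))

exact = lambda t: np.exp(-t)                             # "perfect" prediction
perturbed = lambda t: np.exp(-t) + 0.01 * np.sin(5 * t)  # prediction with error

t, dt = 0.4, 0.05
r_exact = midpoint_residual(exact, t, dt)
r_perturbed = midpoint_residual(perturbed, t, dt)
print(r_exact, r_perturbed)  # residual grows with the prediction error
```

The exact solution leaves only the scheme's O(dt³) truncation residual, while the perturbed prediction produces a much larger residual, which is the correlation the paper exploits for error estimation.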
arXiv Detail & Related papers (2024-01-10T15:18:56Z)
- Error Analysis of Physics-Informed Neural Networks for Approximating Dynamic PDEs of Second Order in Time [1.123111111659464]
We consider the approximation of a class of dynamic partial differential equations (PDE) of second order in time by the physics-informed neural network (PINN) approach.
Our analyses show that, with feed-forward neural networks having two hidden layers and the $tanh$ activation function, the PINN approximation errors for the solution field can be effectively bounded by the training loss and the number of training data points.
We present ample numerical experiments with the new PINN algorithm for the wave equation, the Sine-Gordon equation and the linear elastodynamic equation, which show that the method can accurately capture the solution field.
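To make the setting concrete, the snippet below evaluates a PINN-style residual loss for the 1D wave equation u_tt = u_xx, using central finite differences in place of automatic differentiation (an assumption made for self-containedness). An exact travelling-wave solution drives the residual to near zero, while a candidate with the wrong wave speed does not.

```python
import numpy as np

def wave_residual_loss(u, xs, ts, eps=1e-3):
    """Mean squared PINN residual u_tt - u_xx over a grid of collocation
    points, with second derivatives approximated by central differences."""
    X, T = np.meshgrid(xs, ts)
    u_tt = (u(X, T + eps) - 2 * u(X, T) + u(X, T - eps)) / eps**2
    u_xx = (u(X + eps, T) - 2 * u(X, T) + u(X - eps, T)) / eps**2
    return np.mean((u_tt - u_xx) ** 2)

xs = np.linspace(0, 1, 20)
ts = np.linspace(0, 1, 20)

good = lambda x, t: np.sin(x - t)        # exact solution of u_tt = u_xx
bad = lambda x, t: np.sin(x - 0.5 * t)   # wrong wave speed, violates the PDE

lg = wave_residual_loss(good, xs, ts)
lb = wave_residual_loss(bad, xs, ts)
print(lg, lb)  # near zero vs. O(1)
```

A PINN minimizes exactly this kind of residual loss over the network parameters, with exact derivatives from autodiff instead of finite differences.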
arXiv Detail & Related papers (2023-03-22T00:51:11Z)
- Discrete-Time Nonlinear Feedback Linearization via Physics-Informed Machine Learning [0.0]
We present a physics-informed machine learning scheme for the feedback linearization of nonlinear systems.
We show that the proposed PIML outperforms the traditional numerical implementation.
arXiv Detail & Related papers (2023-03-15T19:03:23Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
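The mechanism can be demonstrated in a few lines: for the Gaussian-smoothed function f_σ(x) = E[f(x + σε)], Stein's identity gives its second derivative as E[f(x + σε)(ε² − 1)]/σ², a Monte-Carlo average requiring only forward evaluations of f. The target function, σ, and sample count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_second_derivative(f, x, sigma=0.3, n=500_000):
    """Estimate d2/dx2 of the Gaussian-smoothed f via Stein's identity:
    f_sigma''(x) = E[f(x + sigma*eps) * (eps**2 - 1)] / sigma**2,
    using only forward evaluations of f (no back-propagation)."""
    eps = rng.standard_normal(n)
    return np.mean(f(x + sigma * eps) * (eps**2 - 1)) / sigma**2

est = smoothed_second_derivative(np.sin, 0.5)
print(est, -np.sin(0.5))  # the estimate tracks the true second derivative
```

The estimate carries a smoothing bias of order σ² and Monte-Carlo noise of order 1/√n, the trade-off the paper manages when replacing back-propagated derivatives in the PINN loss.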
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- On Convergence of Training Loss Without Reaching Stationary Points [62.41370821014218]
We show that neural network weight variables do not converge to stationary points where the gradient of the loss function vanishes.
We propose a new perspective based on the ergodic theory of dynamical systems.
arXiv Detail & Related papers (2021-10-12T18:12:23Z)
- Quantum Algorithms for Data Representation and Analysis [68.754953879193]
We provide quantum procedures that speed-up the solution of eigenproblems for data representation in machine learning.
The power and practical use of these subroutines is shown through new quantum algorithms, sublinear in the input matrix's size, for principal component analysis, correspondence analysis, and latent semantic analysis.
Results show that the run-time parameters that do not depend on the input's size are reasonable and that the error on the computed model is small, allowing for competitive classification performances.
arXiv Detail & Related papers (2021-04-19T00:41:43Z)
- Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics [49.41640137945938]
We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
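For reference, the classical (non-neural) dynamic mode decomposition that this method builds on: fit a linear operator to consecutive snapshot pairs by least squares and read its eigenvalues. The toy linear system below is an assumption for illustration.

```python
import numpy as np

# Snapshot matrix of a toy linear system x_{k+1} = A_true x_k.
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
m = 20
X = np.empty((2, m))
X[:, 0] = [1.0, 1.0]
for k in range(1, m):
    X[:, k] = A_true @ X[:, k - 1]

# DMD: least-squares linear operator mapping each snapshot to the next;
# its eigenvalues approximate the spectrum of the underlying dynamics.
A_dmd = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
eigs = np.sort(np.linalg.eigvals(A_dmd).real)
print(eigs)  # recovers the true eigenvalues 0.8 and 0.9
```

The neural variant replaces the raw snapshots with a learned lift of the state, so the same linear-operator fit can capture nonlinear dynamics.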
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.