Learning-based Design of Luenberger Observers for Autonomous Nonlinear
Systems
- URL: http://arxiv.org/abs/2210.01476v2
- Date: Wed, 5 Apr 2023 15:00:56 GMT
- Title: Learning-based Design of Luenberger Observers for Autonomous Nonlinear
Systems
- Authors: Muhammad Umar B. Niazi, John Cao, Xudong Sun, Amritam Das, Karl Henrik
Johansson
- Abstract summary: Luenberger observers for nonlinear systems involve transforming the state to an alternate coordinate system.
We propose a novel approach that uses supervised physics-informed neural networks to approximate both the transformation and its inverse.
Our method exhibits superior generalization capabilities compared to contemporary methods and demonstrates robustness to both the neural networks' approximation errors and system uncertainties.
- Score: 5.953597709282766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing Luenberger observers for nonlinear systems involves the challenging
task of transforming the state to an alternate coordinate system, possibly of
higher dimensions, where the system is asymptotically stable and linear up to
output injection. The observer then estimates the system's state in the
original coordinates by inverting the transformation map. However, finding a
suitable injective transformation whose inverse can be derived remains a
primary challenge for general nonlinear systems. We propose a novel approach
that uses supervised physics-informed neural networks to approximate both the
transformation and its inverse. Our method exhibits superior generalization
capabilities compared to contemporary methods and demonstrates robustness to
both the neural networks' approximation errors and system uncertainties.
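For concreteness, the following is a minimal structural sketch of such an observer, assuming a single measured output, an arbitrary Hurwitz pair (A, B) for the latent dynamics, forward-Euler integration of dz/dt = A z + B y, and an untrained placeholder network T_inv standing in for the learned inverse transformation; none of these concrete choices come from the paper.

    # Sketch of the transformed-coordinate observer structure (all concrete choices are assumptions).
    import numpy as np
    import torch

    d_x, d_z = 3, 5                                   # assumed state and latent dimensions
    A = -np.diag([1.0, 2.0, 3.0, 4.0, 5.0])           # assumed Hurwitz matrix for the latent dynamics
    B = np.ones((d_z, 1))                             # assumed output-injection vector
    T_inv = torch.nn.Sequential(                      # untrained placeholder for the learned inverse map
        torch.nn.Linear(d_z, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x))

    def run_observer(y_samples, dt):
        """Integrate dz/dt = A z + B y with forward Euler, then map back via x_hat = T_inv(z)."""
        z = np.zeros((d_z, 1))
        x_hat = []
        for y in y_samples:                           # y_samples: measured outputs y(t_k)
            z = z + dt * (A @ z + B * float(y))       # linear, asymptotically stable latent dynamics
            with torch.no_grad():
                x_hat.append(T_inv(torch.tensor(z.T, dtype=torch.float32)).numpy().ravel())
        return np.array(x_hat)

    x_hat = run_observer(np.sin(0.05 * np.arange(200)), dt=0.01)   # toy output sequence

The paper's actual contribution lies in how the transformation and its inverse are learned with supervised physics-informed training, which this sketch does not reproduce.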
Related papers
- KKL Observer Synthesis for Nonlinear Systems via Physics-Informed Learning [5.888531936968298]
We propose a novel learning approach for designing Kazantzis-Kravaris/Luenberger (KKL) observers for autonomous nonlinear systems.
The design of a KKL observer involves finding an injective map that transforms the system state into a higher-dimensional observer state.
We generate synthetic data for training by numerically solving the system and observer dynamics.
arXiv Detail & Related papers (2025-01-20T18:38:51Z)
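As a rough illustration of the data-generation step mentioned in the entry above (numerically solving the system and observer dynamics), the sketch below co-simulates an assumed Van der Pol plant with latent dynamics dz/dt = A z + B h(x) to collect (x, z) training pairs; the plant, output map, and (A, B) are placeholder choices, not those used in the paper.

    # Sketch: generate (x, z) training pairs by co-simulating the plant and the latent observer dynamics.
    import numpy as np
    from scipy.integrate import solve_ivp

    d_x, d_z = 2, 5                                   # assumed plant and latent dimensions

    def f(x):                                         # placeholder plant dynamics (Van der Pol oscillator)
        return np.array([x[1], (1.0 - x[0] ** 2) * x[1] - x[0]])

    def h(x):                                         # placeholder measurement map y = h(x)
        return x[0]

    A = -np.diag(np.arange(1.0, d_z + 1.0))           # assumed Hurwitz matrix
    B = np.ones(d_z)                                  # assumed output-injection vector

    def coupled(t, w):                                # stacked state w = [x; z]
        x, z = w[:d_x], w[d_x:]
        return np.concatenate([f(x), A @ z + B * h(x)])

    def make_pairs(x0, t_final=20.0, n_samples=200):
        t_eval = np.linspace(0.0, t_final, n_samples)
        sol = solve_ivp(coupled, (0.0, t_final), np.concatenate([x0, np.zeros(d_z)]),
                        t_eval=t_eval, rtol=1e-8)
        return sol.y[:d_x].T, sol.y[d_x:].T           # samples x(t_k) and z(t_k) used as training pairs

    x_data, z_data = make_pairs(np.array([1.0, 0.0]))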
- Koopman-based Deep Learning for Nonlinear System Estimation [1.3791394805787949]
We present a novel data-driven linear estimator based on Koopman operator theory to extract meaningful finite-dimensional representations of complex non-linear systems.
Our estimator is also adaptive to a diffeomorphic transformation of the estimated nonlinear system, which enables it to compute optimal state estimates without re-learning.
arXiv Detail & Related papers (2024-05-01T16:49:54Z)
- Nonlinear Discrete-Time Observers with Physics-Informed Neural Networks [0.0]
We use Physics-Informed Neural Networks (PINNs) to solve the discrete-time nonlinear observer state estimation problem.
The proposed PINN approach aims at learning a nonlinear state transformation map by solving a system of inhomogeneous functional equations.
arXiv Detail & Related papers (2024-02-19T18:47:56Z)
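For a discrete-time KKL-type transformation, the functional equations referred to above typically take the form T(f(x)) = A T(x) + B h(x) with A Schur stable; below is a hedged sketch of one residual loss a physics-informed network could minimize over sampled states, where T, f, h, A, and B are illustrative stand-ins rather than the paper's setup.

    # Sketch: residual loss enforcing the functional equation T(f(x)) = A T(x) + B h(x) on sampled states.
    import torch

    def functional_equation_loss(T, f, h, A, B, x_batch):
        """T: network mapping (N, d_x) -> (N, d_z); f, h: batched next-state and output maps;
        A: (d_z, d_z) Schur-stable matrix; B: (d_z, d_y)."""
        lhs = T(f(x_batch))                            # transformation evaluated at the next state
        rhs = T(x_batch) @ A.T + h(x_batch) @ B.T      # linear latent update plus output injection
        return torch.mean(torch.sum((lhs - rhs) ** 2, dim=1))

The main paper above pursues the continuous-time counterpart, training both the transformation and its inverse in a supervised, physics-informed manner.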
- Adaptive Meta-Learning-Based KKL Observer Design for Nonlinear Dynamical Systems [0.0]
This paper presents a novel approach to observer design for nonlinear dynamical systems through meta-learning.
We introduce a framework that leverages information from measurements of the system output to design a learning-based KKL observer capable of online adaptation to a variety of system conditions and attributes.
arXiv Detail & Related papers (2023-10-30T12:25:14Z)
- Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance [0.0]
We find low-dimensional systems of coordinates for model reduction that balance adjoint-based information about the system's sensitivity with the variance of states along trajectories.
We demonstrate these techniques on a simple, yet challenging three-dimensional system and a nonlinear axisymmetric jet flow simulation with $10^5$ state variables.
arXiv Detail & Related papers (2022-07-28T21:45:08Z)
- Structure-Preserving Learning Using Gaussian Processes and Variational Integrators [62.31425348954686]
We propose the combination of a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions using a supervised learning approach.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach is up to 10x faster, with comparable accuracy, than the state of the art on public benchmarks.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
- Learning the Linear Quadratic Regulator from Nonlinear Observations [135.66883119468707]
We introduce a new problem setting for continuous control called the LQR with Rich Observations, or RichLQR.
In our setting, the environment is summarized by a low-dimensional continuous latent state with linear dynamics and quadratic costs.
Our results constitute the first provable sample complexity guarantee for continuous control with an unknown nonlinearity in the system model and general function approximation.
arXiv Detail & Related papers (2020-10-08T07:02:47Z)
- Attention that does not Explain Away [54.42960937271612]
Models based on the Transformer architecture have achieved better accuracy than the ones based on competing architectures for a large set of tasks.
A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances.
We propose a doubly-normalized attention scheme that is simple to implement and provides theoretical guarantees for avoiding the "explaining away" effect.
arXiv Detail & Related papers (2020-09-29T21:05:39Z)
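As a loose illustration of what "doubly-normalized" means in the entry above, the sketch below normalizes the attention weights along the query axis and then along the key axis; it shows the general idea only and is not claimed to reproduce the paper's exact formulation.

    # Sketch: attention normalized over the query axis, then the key axis (one doubly-normalized variant).
    import torch

    def doubly_normalized_attention(q, k, v, eps=1e-9):
        """q, k, v: (batch, length, dim) tensors; returns a (batch, length, dim) attention output."""
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # scaled dot-product scores (batch, Lq, Lk)
        e = torch.exp(scores - scores.max())                      # exponentiate with a stability shift
        a = e / (e.sum(dim=-2, keepdim=True) + eps)               # first normalize over the query axis
        w = a / (a.sum(dim=-1, keepdim=True) + eps)               # then renormalize over the key axis
        return w @ v

    out = doubly_normalized_attention(torch.randn(2, 8, 16), torch.randn(2, 8, 16), torch.randn(2, 8, 16))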
- Active Learning for Nonlinear System Identification with Guarantees [102.43355665393067]
We study a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs.
We propose an active learning approach that achieves this by repeating three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.
We show that our method estimates nonlinear dynamical systems at a parametric rate, similar to the statistical rate of standard linear regression.
arXiv Detail & Related papers (2020-06-18T04:54:11Z)
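To make the linear-in-features assumption in the entry above concrete, the sketch below shows only the re-estimation step: fitting x_{t+1} ≈ W φ(x_t, u_t) by least squares from all transitions collected so far; the feature map phi, the data layout, and the toy call are assumptions for illustration.

    # Sketch: least-squares re-estimation of linear-in-features dynamics x_{t+1} = W @ phi(x_t, u_t).
    import numpy as np

    def phi(x, u):                                     # placeholder feature embedding of a state-action pair
        return np.concatenate([x, u, np.outer(x, u).ravel()])

    def reestimate(transitions):
        """transitions: list of (x_t, u_t, x_next) tuples gathered while tracking planned trajectories."""
        Phi = np.stack([phi(x, u) for x, u, _ in transitions])        # (N, d_phi) feature matrix
        X_next = np.stack([x_next for _, _, x_next in transitions])   # (N, d_x) next states
        W, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)              # least squares: Phi @ W ~= X_next
        return W.T                                                    # estimate with x_next ~= W_hat @ phi(x, u)

    data = [(np.ones(2), np.ones(1), np.ones(2)) for _ in range(10)]  # toy transitions to show the call
    W_hat = reestimate(data)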
This list is automatically generated from the titles and abstracts of the papers on this site.