Moving-Horizon Estimators for Hyperbolic and Parabolic PDEs in 1-D
- URL: http://arxiv.org/abs/2401.02516v1
- Date: Thu, 4 Jan 2024 19:55:43 GMT
- Title: Moving-Horizon Estimators for Hyperbolic and Parabolic PDEs in 1-D
- Authors: Luke Bhan, Yuanyuan Shi, Iasson Karafyllis, Miroslav Krstic, and James
B. Rawlings
- Abstract summary: We introduce moving-horizon estimators for PDEs to remove the need for a numerical solution of an observer PDE in real time.
We accomplish this using the PDE backstepping method which, for certain classes of both hyperbolic and parabolic PDEs, produces moving-horizon state estimates explicitly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Observers for PDEs are themselves PDEs. Therefore, producing real time
estimates with such observers is computationally burdensome. For
finite-dimensional (ODE) systems, moving-horizon estimators (MHEs) are
operators whose output is the state estimate, while their inputs are the
initial state estimate at the beginning of the horizon as well as the measured
output and input signals over the moving time horizon. In this paper we
introduce MHEs for PDEs which remove the need for a numerical solution of an
observer PDE in real time. We accomplish this using the PDE backstepping method
which, for certain classes of both hyperbolic and parabolic PDEs, produces
moving-horizon state estimates explicitly. Precisely, to explicitly produce the
state estimates, we employ a backstepping transformation of a hard-to-solve
observer PDE into a target observer PDE, which is explicitly solvable. The MHEs
we propose are not new observer designs but simply the explicit MHE
realizations, over a moving horizon of arbitrary length, of the existing
backstepping observers. Our PDE MHEs lack the optimality of the MHEs that arose
as duals of MPC, but they are given explicitly, even for PDEs. In the paper we
provide explicit formulae for MHEs for both hyperbolic and parabolic PDEs, as
well as simulation results that illustrate theoretically guaranteed convergence
of the MHEs.
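The MHE structure described in the abstract is easiest to see in the finite-dimensional case. Below is a minimal numerical sketch (a hypothetical ODE example of my own, not taken from the paper): for a plant $x' = Ax$, $y = Cx$ with a Luenberger observer $\hat{x}' = (A - LC)\hat{x} + Ly$, the observer's solution over a horizon $[t-T, t]$ is an explicit formula in the initial estimate $\hat{x}(t-T)$ and the measured output $y$ over the horizon — the same "explicit MHE realization of an existing observer" idea the paper develops for PDE backstepping observers. All matrices and gains here are illustrative, chosen by hand.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (fine for diagonalizable M)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# Hypothetical plant x' = A x with measurement y = C x, and a Luenberger
# observer xhat' = (A - L C) xhat + L y.  Writing F = A - L C, the observer
# solution over a horizon [t-T, t] is the explicit MHE:
#   xhat(t) = e^{F T} xhat(t-T) + \int_{t-T}^{t} e^{F (t-s)} L y(s) ds,
# an operator mapping (initial estimate, output over the horizon) to the
# current state estimate -- no observer ODE is integrated online.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # lightly damped oscillator
C = np.array([[1.0, 0.0]])                  # measure the first state
L = np.array([[2.0], [1.0]])                # hand-picked stabilizing gain
F = A - L @ C                               # Hurwitz for this choice

def explicit_mhe(xhat0, y_samples, dt):
    """Evaluate the horizon formula with piecewise-constant output samples."""
    Phi = expm(F * dt)                                   # e^{F dt}
    Gamma = np.linalg.solve(F, Phi - np.eye(2)) @ L      # (int_0^dt e^{F s} ds) L
    xhat = xhat0
    for y in y_samples:                                  # accumulate the convolution
        xhat = Phi @ xhat + Gamma * y
    return xhat

# Simulate the plant exactly and apply the MHE over successive horizons,
# starting from a deliberately wrong initial estimate.
dt, n_per_horizon = 0.01, 200                # horizon length T = 2.0
Phi_true = expm(A * dt)
x = np.array([[1.0], [0.0]])                 # true initial state
xhat = np.array([[5.0], [-3.0]])             # wrong initial estimate
errors = []
for _ in range(3):
    ys = []
    for _ in range(n_per_horizon):
        ys.append((C @ x).item())            # sample the output
        x = Phi_true @ x
    xhat = explicit_mhe(xhat, ys, dt)
    errors.append(float(np.linalg.norm(x - xhat)))

print(errors)   # error norms after each horizon, decaying toward a small quadrature floor
```

The point of the sketch is the shape of the computation, not the discretization: each horizon evaluation is a closed-form map from $(\hat{x}(t-T), y|_{[t-T,t]})$ to $\hat{x}(t)$, which is what the paper's backstepping transformation makes available explicitly for the hyperbolic and parabolic PDE classes it treats.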
Related papers
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Gain Scheduling with a Neural Operator for a Transport PDE with Nonlinear Recirculation [1.124958340749622]
Gain-scheduling (GS) nonlinear design is the simplest approach to the design of nonlinear feedback.
Recently introduced neural operators (NOs) can be trained to produce the gain functions, rapidly in real time, for each state value.
We establish local stabilization of hyperbolic PDEs with nonlinear recirculation.
arXiv Detail & Related papers (2024-01-04T19:45:27Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Elucidating the solution space of extended reverse-time SDE for diffusion models [54.23536653351234]
Diffusion models (DMs) demonstrate potent image generation capabilities in various generative modeling tasks.
Their primary limitation lies in slow sampling speed, requiring hundreds or thousands of sequential function evaluations to generate high-quality images.
We formulate the sampling process as an extended reverse-time SDE, unifying prior explorations into ODEs and SDEs.
We devise fast and training-free samplers, ER-SDE-Solvers, achieving state-of-the-art performance across all samplers.
arXiv Detail & Related papers (2023-09-12T12:27:17Z)
- Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs [2.2869182375774613]
Multiple operators arise in the control of PDE systems from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of the Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
arXiv Detail & Related papers (2023-08-21T06:42:33Z)
- Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay [9.155455179145473]
We extend the recently introduced DeepONet operator-learning framework for PDE control to an advanced hyperbolic class.
The PDE backstepping design produces gain functions that are outputs of a nonlinear operator.
The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight.
arXiv Detail & Related papers (2023-07-21T08:57:16Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results, with an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Machine Learning Accelerated PDE Backstepping Observers [56.65019598237507]
We propose a framework for accelerating PDE observer computations using learning-based approaches.
We employ the recently-developed Fourier Neural Operator (FNO) to learn the functional mapping from the initial observer state to the state estimate.
We consider the state estimation for three benchmark PDE examples motivated by applications.
arXiv Detail & Related papers (2022-11-28T04:06:43Z)
- Learning to Accelerate Partial Differential Equations via Latent Global Evolution [64.72624347511498]
Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability.
We demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-15T17:31:24Z)
- Model Reduction of Swing Equations with Physics Informed PDE [3.3263205689999444]
This manuscript is the first step towards building a robust and efficient model reduction methodology to capture transient dynamics in a transmission level electric power system.
We show that, when properly coarse-grained, i.e., with the PDE coefficients and source terms extracted via a spatial convolution of the respective discrete coefficients in the swing equations, the resulting PDE reproduces the original swing dynamics faithfully and efficiently.
arXiv Detail & Related papers (2021-10-26T22:46:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.