Neural Operators of Backstepping Controller and Observer Gain Functions
for Reaction-Diffusion PDEs
- URL: http://arxiv.org/abs/2303.10506v1
- Date: Sat, 18 Mar 2023 21:55:44 GMT
- Title: Neural Operators of Backstepping Controller and Observer Gain Functions
for Reaction-Diffusion PDEs
- Authors: Miroslav Krstic, Luke Bhan, Yuanyuan Shi
- Abstract summary: In recent work, we developed the neural operators for PDE backstepping designs for first order hyperbolic PDEs.
Here we extend this framework to the more complex class of parabolic PDEs.
We prove stability in closed loop under gains produced by neural operators.
- Score: 2.094821665776961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unlike ODEs, whose models involve system matrices and whose controllers
involve vector or matrix gains, PDE models involve functions in those roles:
functional coefficients, dependent on the spatial variables, and gain functions
dependent on space as well. The designs of gains for controllers and observers
for PDEs, such as PDE backstepping, are mappings of system model functions into
gain functions. These infinite dimensional nonlinear operators are given in an
implicit form through PDEs, in spatial variables, which need to be solved to
determine the gain function for each new functional coefficient of the PDE. The
need for solving such PDEs can be eliminated by learning and approximating the
said design mapping in the form of a neural operator. Learning the neural
operator requires a sufficient number of prior solutions for the design PDEs,
offline, as well as the training of the operator. In recent work, we developed
the neural operators for PDE backstepping designs for first order hyperbolic
PDEs. Here we extend this framework to the more complex class of parabolic
PDEs. The key theoretical question is whether the controllers are still
stabilizing, and whether the observers are still convergent, if they employ the
approximate functional gains generated by the neural operator. We provide
affirmative answers to these questions, namely, we prove stability in closed
loop under gains produced by neural operators. We illustrate the theoretical
results with numerical tests and publish our code on GitHub. The neural
operators are three orders of magnitude faster in generating gain functions
than PDE solvers for such gain functions. This opens up the opportunity for the
use of this neural operator methodology in adaptive control and in gain
scheduling control for nonlinear PDEs.
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs [85.40198664108624]
We propose Codomain Attention Neural Operator (CoDA-NO) to solve multiphysics problems with PDEs.
CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems.
We find CoDA-NO to outperform existing methods by over 36% on complex downstream tasks with limited data.
arXiv Detail & Related papers (2024-03-19T08:56:20Z)
- Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE [3.3044728148521623]
We present the first result on applying NOs in adaptive PDE control, presented for a benchmark 1-D hyperbolic PDE with recirculation.
We also present numerical simulations demonstrating stability and observe speedups up to three orders of magnitude.
arXiv Detail & Related papers (2024-01-15T17:52:15Z)
- Gain Scheduling with a Neural Operator for a Transport PDE with Nonlinear Recirculation [1.124958340749622]
Gain-scheduling (GS) nonlinear design is the simplest approach to the design of nonlinear feedback.
Recently introduced neural operators (NO) can be trained to produce the gain functions, rapidly in real time, for each state value.
We establish local stabilization of hyperbolic PDEs with nonlinear recirculation.
arXiv Detail & Related papers (2024-01-04T19:45:27Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs [2.2869182375774613]
Multiple operators arise in the control of PDE systems from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of the Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
arXiv Detail & Related papers (2023-08-21T06:42:33Z)
- Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay [9.155455179145473]
We extend the recently introduced DeepONet operator-learning framework for PDE control to an advanced hyperbolic class.
The PDE backstepping design produces gain functions that are outputs of a nonlinear operator.
The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight.
arXiv Detail & Related papers (2023-07-21T08:57:16Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- PDE-constrained Models with Neural Network Terms: Optimization and Global Convergence [0.0]
Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering.
We rigorously study the optimization of a class of linear elliptic PDEs with neural network terms.
We train a neural network model for an application in fluid mechanics, in which the neural network functions as a closure model for the Reynolds-averaged Navier-Stokes equations.
arXiv Detail & Related papers (2021-05-18T16:04:33Z)