Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay
- URL: http://arxiv.org/abs/2307.11436v2
- Date: Fri, 14 Jun 2024 15:17:20 GMT
- Title: Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay
- Authors: Jie Qi, Jing Zhang, Miroslav Krstic
- Abstract summary: We extend the recently introduced DeepONet operator-learning framework for PDE control to an advanced hyperbolic class.
The PDE backstepping design produces gain functions that are outputs of a nonlinear operator.
The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight.
- Score: 9.155455179145473
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The recently introduced DeepONet operator-learning framework for PDE control is extended from the results for basic hyperbolic and parabolic PDEs to an advanced hyperbolic class that involves delays on both the state and the system output or input. The PDE backstepping design produces gain functions that are outputs of a nonlinear operator, mapping functions on a spatial domain into functions on a spatial domain, where this gain-generating operator's inputs are the PDE's coefficients. The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight. Once we produce this approximation-theoretic result in infinite dimension, with it we establish stability in closed loop under feedback that employs approximate gains. In addition to supplying such results under full-state feedback, we also develop DeepONet-approximated observers and output-feedback laws and prove their own stabilizing properties under neural operator approximations. With numerical simulations we illustrate the theoretical results and quantify the numerical effort savings, which amount to two orders of magnitude, thanks to replacing numerical PDE solving with the DeepONet.
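The abstract describes a DeepONet whose branch network encodes the (discretized) PDE coefficient and whose trunk network encodes the spatial query point, with the gain recovered as their inner product. A minimal, untrained NumPy sketch of such a forward pass (layer sizes and sensor counts are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) MLP parameters; layer sizes are hypothetical."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass with tanh hidden activations and a linear last layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                    # sensor count and latent width (assumed)
branch = mlp([m, 64, p])         # input: PDE coefficient sampled at m points
trunk = mlp([1, 64, p])          # input: spatial query point in [0, 1]

coeff = np.sin(np.linspace(0, 1, m))        # example (recycle) coefficient
xs = np.linspace(0, 1, 101).reshape(-1, 1)  # query grid for the gain function

b = forward(branch, coeff)   # shape (p,): coefficient embedding
t = forward(trunk, xs)       # shape (101, p): per-point embeddings
gain = t @ b                 # DeepONet output: gain evaluated on the grid
print(gain.shape)            # (101,)
```

In the paper's setting, training data for such a network would come from solving the backstepping kernel equations offline for many coefficient samples; the sketch only shows the operator's input/output structure.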
Related papers
- Adaptive control of reaction-diffusion PDEs via neural operator-approximated gain kernels [3.3044728148521623]
Neural operator approximations of the gain kernels in PDE backstepping have emerged as a viable method for implementing controllers in real time.
We extend the neural operator methodology from adaptive control of a hyperbolic PDE to adaptive control of a benchmark parabolic PDE.
We prove global stability and regulation of the plant state for a Lyapunov design of parameter adaptation.
arXiv Detail & Related papers (2024-07-01T19:24:36Z)
- Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE [3.3044728148521623]
We present the first result on applying neural operators (NOs) in adaptive PDE control, demonstrated on a benchmark 1-D hyperbolic PDE with recirculation.
We also present numerical simulations demonstrating stability and observe speedups of up to three orders of magnitude.
arXiv Detail & Related papers (2024-01-15T17:52:15Z)
- Backstepping Neural Operators for $2\times 2$ Hyperbolic PDEs [2.034806188092437]
We study the subject of approximating systems of gain kernel PDEs for hyperbolic PDE plants.
Engineering applications include oil drilling, the Saint-Venant model of shallow water waves, and the Aw-Rascle-Zhang model of stop-and-go instability in congested traffic flow.
arXiv Detail & Related papers (2023-12-28T00:49:41Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs [2.2869182375774613]
Multiple operators arise in the control of PDE systems from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
arXiv Detail & Related papers (2023-08-21T06:42:33Z)
- Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs [2.094821665776961]
Neural operators for PDE backstepping designs were previously developed for first-order hyperbolic PDEs.
Here we extend this framework to the more complex class of parabolic PDEs.
We prove stability in closed loop under gains produced by neural operators.
arXiv Detail & Related papers (2023-03-18T21:55:44Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results, with an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Machine Learning Accelerated PDE Backstepping Observers [56.65019598237507]
We propose a framework for accelerating PDE observer computations using learning-based approaches.
We employ the recently-developed Fourier Neural Operator (FNO) to learn the functional mapping from the initial observer state to the state estimate.
We consider the state estimation for three benchmark PDE examples motivated by applications.
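The FNO mentioned above learns an operator by applying learned weights to the low-frequency Fourier modes of its input. A toy 1-D spectral layer (random, untrained weights; the grid size and retained mode count are illustrative assumptions) shows the core mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_layer(u, weights, n_modes):
    """Multiply the lowest Fourier modes of u by learned complex weights,
    zeroing the rest -- the core operation of an FNO-style layer."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights
    return np.fft.irfft(out_hat, n=u.size)

n, n_modes = 128, 12   # grid size and retained modes (assumed values)
weights = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)

# Example initial observer state on a periodic grid.
u0 = np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))
u1 = spectral_layer(u0, weights, n_modes)   # one FNO-style update
print(u1.shape)   # (128,)
```

A full FNO stacks several such layers with pointwise nonlinearities and a residual linear path; the sketch isolates only the spectral multiplication that makes the architecture resolution-invariant.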
arXiv Detail & Related papers (2022-11-28T04:06:43Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
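A POD basis of the kind referenced above is typically extracted from solution snapshots via an SVD. A minimal NumPy sketch with synthetic snapshot data (the decaying-amplitude sine snapshots and the reduced order are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshot matrix: each column is one (noisy) solution snapshot,
# with rapidly decaying modal amplitudes so a low-rank basis suffices.
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([
    2.0 ** (-k) * np.sin((k + 1) * np.pi * x) + 0.005 * rng.standard_normal(x.size)
    for k in range(40)])

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                   # reduced order (assumed)
basis = U[:, :r]        # r orthonormal POD modes, shape (200, r)

# Reduced-order approximation of one snapshot via orthogonal projection.
u = snapshots[:, 0]
u_r = basis @ (basis.T @ u)
err = np.linalg.norm(u - u_r) / np.linalg.norm(u)
print(err < 0.1)        # projection error is small for the dominant snapshot
```

In the paper's pipeline, neural networks are regressed onto the coefficients in such a basis; the sketch only shows how the basis itself is obtained.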
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions and learns the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the function library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.