Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion
PDEs
- URL: http://arxiv.org/abs/2308.10501v1
- Date: Mon, 21 Aug 2023 06:42:33 GMT
- Title: Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion
PDEs
- Authors: Shanshan Wang, Mamadou Diagne, Miroslav Krstić
- Abstract summary: Multiple operators arise in the control of PDE systems from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of the Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks that approximate nonlinear function-to-function
mappings, i.e., operators, which are called DeepONet, have been demonstrated in
recent articles to be capable of encoding entire PDE control methodologies,
such as backstepping, so that, for each new functional coefficient of a PDE
plant, the backstepping gains are obtained through a simple function
evaluation. These initial results have been limited to single PDEs from a given
class, approximating the solutions of only single-PDE operators for the gain
kernels. In this paper we expand this framework to the approximation of
multiple (cascaded) nonlinear operators. Multiple operators arise in the
control of PDE systems from distinct PDE classes, such as the system in this
paper: a reaction-diffusion plant, which is a parabolic PDE, with input delay,
which is a hyperbolic PDE. The DeepONet-approximated nonlinear operator is a
cascade/composition of the operators defined by one hyperbolic PDE of the
Goursat form and one parabolic PDE on a rectangle, both of which are bilinear
in their input functions and not explicitly solvable. For the delay-compensated
PDE backstepping controller, which employs the learned control operator,
namely, the approximated gain kernel, we guarantee exponential stability in the
$L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
Simulations illustrate the contributed theory.
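The abstract's central idea, that for each new functional coefficient the backstepping gains are obtained "through a simple function evaluation" of a learned operator, can be sketched with a minimal DeepONet. The following is an illustrative sketch, not the authors' implementation: the branch/trunk architecture is the standard DeepONet form, but the layer sizes, the untrained random weights, and the example coefficient are all assumptions made here for demonstration.

```python
import numpy as np

# Hedged sketch (not the paper's code): a minimal, untrained DeepONet that
# maps a sampled reaction coefficient lambda(.) to one value of the
# backstepping gain kernel k(x, y). The point is the calling pattern:
# once trained, producing a gain is a single cheap forward pass.

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) MLP parameters as a list of (W, b) pairs."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """tanh MLP with a linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m_sensors, p = 32, 16              # illustrative sensor count and latent width
branch = mlp([m_sensors, 64, p])   # encodes the input function lambda(.)
trunk = mlp([2, 64, p])            # encodes the query point (x, y)

def deeponet_kernel(lam_samples, xy):
    """k(x, y) ~ <branch(lambda), trunk(x, y)>: the DeepONet inner product."""
    b = forward(branch, lam_samples)    # shape (p,)
    t = forward(trunk, np.asarray(xy))  # shape (p,)
    return float(b @ t)

# Evaluate the surrogate for one example coefficient and one query point.
xs = np.linspace(0.0, 1.0, m_sensors)
lam = 5.0 * np.cos(2.0 * np.pi * xs)   # an example reaction coefficient
k_val = deeponet_kernel(lam, (0.7, 0.3))
```

In the paper's setting the learned target is itself a composition of two operators (a Goursat-form hyperbolic kernel PDE and a parabolic PDE on a rectangle); the sketch above only shows the evaluation interface, not that cascade.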
Related papers
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z) - Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic
PDE [3.3044728148521623]
We present the first result on applying NOs in adaptive PDE control, presented for a benchmark 1-D hyperbolic PDE with recirculation.
We also present numerical simulations demonstrating stability and observe speedups up to three orders of magnitude.
arXiv Detail & Related papers (2024-01-15T17:52:15Z) - Gain Scheduling with a Neural Operator for a Transport PDE with
Nonlinear Recirculation [1.124958340749622]
Gain-scheduling (GS) nonlinear design is the simplest approach to the design of nonlinear feedback.
Recently introduced neural operators (NO) can be trained to produce the gain functions, rapidly in real time, for each state value.
We establish local stabilization of hyperbolic PDEs with nonlinear recirculation.
arXiv Detail & Related papers (2024-01-04T19:45:27Z) - Backstepping Neural Operators for $2\times 2$ Hyperbolic PDEs [2.034806188092437]
We study the subject of approximating systems of gain kernel PDEs for hyperbolic PDE plants.
Engineering applications include oil drilling, the Saint-Venant model of shallow water waves, and the Aw-Rascle-Zhang model of stop-and-go instability in congested traffic flow.
arXiv Detail & Related papers (2023-12-28T00:49:41Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay [9.155455179145473]
We extend the recently introduced DeepONet operator-learning framework for PDE control to an advanced hyperbolic class.
The PDE backstepping design produces gain functions that are outputs of a nonlinear operator.
The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight.
arXiv Detail & Related papers (2023-07-21T08:57:16Z) - Neural Operators of Backstepping Controller and Observer Gain Functions
for Reaction-Diffusion PDEs [2.094821665776961]
We develop the neural operators for PDE backstepping designs for first order hyperbolic PDEs.
Here we extend this framework to the more complex class of parabolic PDEs.
We prove stability in closed loop under gains produced by neural operators.
arXiv Detail & Related papers (2023-03-18T21:55:44Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method, which can partially alleviate this problem, by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z) - Physics-Informed Neural Operator for Learning Partial Differential
Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, that makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.