Gain Scheduling with a Neural Operator for a Transport PDE with Nonlinear Recirculation
- URL: http://arxiv.org/abs/2401.02511v1
- Date: Thu, 4 Jan 2024 19:45:27 GMT
- Title: Gain Scheduling with a Neural Operator for a Transport PDE with Nonlinear Recirculation
- Authors: Maxence Lamarque, Luke Bhan, Rafael Vazquez, and Miroslav Krstic
- Abstract summary: Gain-scheduling (GS) nonlinear design is the simplest approach to the design of nonlinear feedback.
Recently introduced neural operators (NO) can be trained to produce the gain functions rapidly, in real time, for each state value.
We establish local stabilization of hyperbolic PDEs with nonlinear recirculation.
- Score: 1.124958340749622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To stabilize PDE models, control laws require space-dependent functional
gains mapped by nonlinear operators from the PDE functional coefficients. When
a PDE is nonlinear and its "pseudo-coefficient" functions are state-dependent,
a gain-scheduling (GS) nonlinear design is the simplest approach to the design
of nonlinear feedback. The GS version of PDE backstepping employs gains
obtained by solving a PDE at each value of the state. Performing such PDE
computations in real time may be prohibitive. The recently introduced neural
operators (NO) can be trained to produce the gain functions, rapidly in real
time, for each state value, without requiring a PDE solution. In this paper we
introduce NOs for GS-PDE backstepping. GS controllers act on the premise that
the state change is slow and, as a result, guarantee only local stability, even
for ODEs. We establish local stabilization of hyperbolic PDEs with nonlinear
recirculation using both a "full-kernel" approach and the "gain-only" approach
to gain operator approximation. Numerical simulations illustrate stabilization
and demonstrate speedup by three orders of magnitude over traditional PDE
gain-scheduling. Code (GitHub) for the numerical implementation is published to
enable exploration.
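To make the gain-scheduling loop above concrete, the following is a minimal Python/NumPy sketch (not the authors' released GitHub code): at every control update the state-dependent pseudo-coefficient beta(x) = f(x, u(x,t)) is formed from the current state, a gain function is produced by a gain map, and the boundary input is the inner product of that gain with the state. The plant form, the nonlinearity f, the placeholder Volterra-type kernel recursion, and all names below are illustrative assumptions; in the paper the gain map is the backstepping kernel PDE solve, approximated by a neural operator in either "full-kernel" or "gain-only" form.

```python
# Minimal sketch of gain-scheduled PDE backstepping with a swappable gain map.
# NOT the authors' implementation: the plant form, the nonlinearity f, and the
# placeholder kernel recursion below are illustrative assumptions.
import numpy as np

N = 101                              # spatial grid on [0, 1]
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def f(xg, u):
    # Assumed state-dependent recirculation coefficient ("pseudo-coefficient").
    return 0.8 * np.cos(2.0 * np.pi * xg) * (1.0 + u**2 / (1.0 + u**2))

def gain_from_kernel_solver(beta, iters=30):
    # Placeholder gain map: successive approximations for a Volterra-type
    # equation k(x) = -beta(x) - int_0^x beta(x - s) k(s) ds. In the paper this
    # role is played by solving the backstepping kernel PDE (or by a trained
    # neural operator approximating the map from beta to the gain).
    k = -beta.copy()
    for _ in range(iters):
        conv = np.array([np.sum(beta[i::-1] * k[:i + 1]) * dx for i in range(N)])
        k = -beta - conv
    return k

def control(u, gain_fn):
    # Gain-scheduled boundary feedback U(t) = int_0^1 k(xi) u(xi, t) dxi,
    # with the gain recomputed from the *current* state (the GS premise).
    beta = f(x, u)                   # scheduling variable
    k = gain_fn(beta)                # full kernel solve or NO surrogate
    return float(np.sum(k * u) * dx)

# Usage: closed-loop simulation of an assumed transport plant
#   u_t = u_x + f(x, u) * u(0, t),  boundary control u(1, t) = U(t),
# discretized by upwind differences and explicit Euler in time.
u = np.exp(-20.0 * (x - 0.5) ** 2)   # initial condition
dt = 0.5 * dx
for _ in range(600):
    U = control(u, gain_from_kernel_solver)   # plug a trained NO in here
    unew = np.empty_like(u)
    unew[:-1] = u[:-1] + dt * ((u[1:] - u[:-1]) / dx + f(x[:-1], u[:-1]) * u[0])
    unew[-1] = U                     # actuated boundary
    u = unew
print("final L2 norm:", np.linalg.norm(u) * np.sqrt(dx))
```

Replacing gain_from_kernel_solver with a trained DeepONet- or FNO-style surrogate of the same signature (coefficient samples in, gain samples out) removes the per-step kernel computation; that substitution is what the reported three-orders-of-magnitude speedup over traditional PDE gain-scheduling refers to.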
Related papers
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE [3.3044728148521623]
We present the first result on applying NOs in adaptive PDE control, presented for a benchmark 1-D hyperbolic PDE with recirculation.
We also present numerical simulations demonstrating stability and observe speedups up to three orders of magnitude.
arXiv Detail & Related papers (2024-01-15T17:52:15Z)
- Backstepping Neural Operators for $2\times 2$ Hyperbolic PDEs [2.034806188092437]
We study the subject of approximating systems of gain kernel PDEs for hyperbolic PDE plants.
Engineering applications include oil drilling, the Saint-Venant model of shallow water waves, and the Aw-Rascle-Zhang model of stop-and-go instability in congested traffic flow.
arXiv Detail & Related papers (2023-12-28T00:49:41Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Elucidating the solution space of extended reverse-time SDE for diffusion models [54.23536653351234]
Diffusion models (DMs) demonstrate potent image generation capabilities in various generative modeling tasks.
Their primary limitation lies in slow sampling speed, requiring hundreds or thousands of sequential function evaluations to generate high-quality images.
We formulate the sampling process as an extended reverse-time SDE, unifying prior explorations into ODEs and SDEs.
We devise fast and training-free samplers, ER-SDE-Solvers, achieving state-of-the-art performance across all samplers.
arXiv Detail & Related papers (2023-09-12T12:27:17Z)
- Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs [2.2869182375774613]
Multiple operators arise in the control of PDE systems from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of the Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
arXiv Detail & Related papers (2023-08-21T06:42:33Z)
- Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay [9.155455179145473]
We extend the recently introduced DeepONet operator-learning framework for PDE control to an advanced hyperbolic class.
The PDE backstepping design produces gain functions that are outputs of a nonlinear operator.
The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight.
arXiv Detail & Related papers (2023-07-21T08:57:16Z)
- Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs [2.094821665776961]
We develop the neural operators for PDE backstepping designs for first-order hyperbolic PDEs.
Here we extend this framework to the more complex class of parabolic PDEs.
We prove stability in closed loop under gains produced by neural operators.
arXiv Detail & Related papers (2023-03-18T21:55:44Z)
- Machine Learning Accelerated PDE Backstepping Observers [56.65019598237507]
We propose a framework for accelerating PDE observer computations using learning-based approaches.
We employ the recently developed Fourier Neural Operator (FNO) to learn the functional mapping from the initial observer state to the state estimate.
We consider the state estimation for three benchmark PDE examples motivated by applications.
arXiv Detail & Related papers (2022-11-28T04:06:43Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)