Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE
- URL: http://arxiv.org/abs/2401.07862v1
- Date: Mon, 15 Jan 2024 17:52:15 GMT
- Title: Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE
- Authors: Maxence Lamarque, Luke Bhan, Yuanyuan Shi, Miroslav Krstic
- Abstract summary: We present the first result on applying NOs in adaptive PDE control, presented for a benchmark 1-D hyperbolic PDE with recirculation.
We also present numerical simulations demonstrating stability and observe speedups up to three orders of magnitude.
- Score: 3.3044728148521623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To stabilize PDEs, feedback controllers require gain kernel functions, which
are themselves governed by PDEs. Furthermore, these gain-kernel PDEs depend on
the PDE plants' functional coefficients. The functional coefficients in PDE
plants are often unknown. This requires an adaptive approach to PDE control,
i.e., an estimation of the plant coefficients conducted concurrently with
control, where a separate PDE for the gain kernel must be solved at each
timestep upon the update in the plant coefficient function estimate. Solving a
PDE at each timestep is computationally expensive and a barrier to the
implementation of real-time adaptive control of PDEs. Recently, results in
neural operator (NO) approximations of functional mappings have been introduced
into PDE control, replacing the computation of the gain kernel with a neural
network that is trained once offline and reused in real time for rapid
solution of the PDEs. In this paper, we present the first result on applying
NOs in adaptive PDE control, presented for a benchmark 1-D hyperbolic PDE with
recirculation. We establish global stabilization via Lyapunov analysis, in the
plant and parameter error states, and also present an alternative approach, via
passive identifiers, which avoids the strong assumptions on kernel
differentiability. We then present numerical simulations demonstrating
stability and observe speedups up to three orders of magnitude, highlighting
the real-time efficacy of neural operators in adaptive control. Our code is
made publicly available on GitHub for future researchers.
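The abstract describes replacing the per-timestep solve of the gain-kernel PDE with a neural operator trained once offline and queried in the loop. The following is a minimal illustrative sketch of that loop, not the paper's implementation: `neural_operator_kernel`, `beta_hat`, and the exponential placeholder map are hypothetical assumptions standing in for a trained DeepONet/FNO and the true kernel PDE solution.

```python
import numpy as np

# Hedged sketch: an offline-trained neural operator replaces the
# per-timestep gain-kernel PDE solve in adaptive backstepping control.

def neural_operator_kernel(beta_hat, x_grid):
    """Stand-in for the trained NO mapping the estimated recirculation
    coefficient beta_hat(x) to the backstepping gain kernel k(1, x).
    The closed-form map below is a placeholder, NOT the true kernel."""
    dx = x_grid[1] - x_grid[0]
    return beta_hat * np.exp(np.cumsum(beta_hat) * dx)

def trapezoid(f, x_grid):
    """Composite trapezoidal rule for the control integral."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x_grid)))

def control_input(v, beta_hat, x_grid):
    """Backstepping boundary feedback U(t) = int_0^1 k(1, x) v(x, t) dx."""
    k = neural_operator_kernel(beta_hat, x_grid)
    return trapezoid(k * v, x_grid)

# One adaptive step: given the current coefficient estimate (the update
# law is omitted here), query the offline-trained NO instead of
# re-solving the kernel PDE at this timestep.
x = np.linspace(0.0, 1.0, 101)
v = np.sin(np.pi * x)              # current plant state (illustrative)
beta_hat = 0.5 * np.ones_like(x)   # current coefficient estimate
U = control_input(v, beta_hat, x)
```

The point of the structure is the cost profile: the expensive mapping from coefficient estimate to gain kernel is amortized into offline training, so each timestep costs only one forward pass plus a quadrature, which is the source of the reported speedups.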
Related papers
- Adaptive control of reaction-diffusion PDEs via neural operator-approximated gain kernels [3.3044728148521623]
Neural operator approximations of the gain kernels in PDE backstepping have emerged as a viable method for implementing controllers in real time.
We extend the neural operator methodology from adaptive control of a hyperbolic PDE to adaptive control of a benchmark parabolic PDE.
We prove global stability and regulation of the plant state for a Lyapunov design of parameter adaptation.
arXiv Detail & Related papers (2024-07-01T19:24:36Z) - Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z) - Gain Scheduling with a Neural Operator for a Transport PDE with Nonlinear Recirculation [1.124958340749622]
Gain-scheduling (GS) nonlinear design is the simplest approach to the design of nonlinear feedback.
Recently introduced neural operators (NOs) can be trained to produce the gain functions rapidly in real time for each state value.
We establish local stabilization of hyperbolic PDEs with nonlinear recirculation.
arXiv Detail & Related papers (2024-01-04T19:45:27Z) - Backstepping Neural Operators for $2\times 2$ Hyperbolic PDEs [2.034806188092437]
We study the subject of approximating systems of gain kernel PDEs for hyperbolic PDE plants.
Engineering applications include oil drilling, the Saint-Venant model of shallow water waves, and the Aw-Rascle-Zhang model of stop-and-go instability in congested traffic flow.
arXiv Detail & Related papers (2023-12-28T00:49:41Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Elucidating the solution space of extended reverse-time SDE for diffusion models [54.23536653351234]
Diffusion models (DMs) demonstrate potent image generation capabilities in various generative modeling tasks.
Their primary limitation lies in slow sampling speed, requiring hundreds or thousands of sequential function evaluations to generate high-quality images.
We formulate the sampling process as an extended reverse-time SDE, unifying prior explorations into ODEs and SDEs.
We devise fast and training-free samplers, ER-SDE-rs, achieving state-of-the-art performance across all samplers.
arXiv Detail & Related papers (2023-09-12T12:27:17Z) - Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs [2.2869182375774613]
Multiple operators arise in the control of PDE systems from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of the Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
arXiv Detail & Related papers (2023-08-21T06:42:33Z) - Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs [2.094821665776961]
We develop the neural operators for PDE backstepping designs for first-order hyperbolic PDEs.
Here we extend this framework to the more complex class of parabolic PDEs.
We prove stability in closed loop under gains produced by neural operators.
arXiv Detail & Related papers (2023-03-18T21:55:44Z) - Machine Learning Accelerated PDE Backstepping Observers [56.65019598237507]
We propose a framework for accelerating PDE observer computations using learning-based approaches.
We employ the recently-developed Fourier Neural Operator (FNO) to learn the functional mapping from the initial observer state to the state estimate.
We consider the state estimation for three benchmark PDE examples motivated by applications.
arXiv Detail & Related papers (2022-11-28T04:06:43Z) - Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method, which can partially alleviate this problem, by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z) - Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.