Machine Learning Accelerated PDE Backstepping Observers
- URL: http://arxiv.org/abs/2211.15044v1
- Date: Mon, 28 Nov 2022 04:06:43 GMT
- Title: Machine Learning Accelerated PDE Backstepping Observers
- Authors: Yuanyuan Shi, Zongyi Li, Huan Yu, Drew Steeves, Anima Anandkumar,
Miroslav Krstic
- Abstract summary: We propose a framework for accelerating PDE observer computations using learning-based approaches.
We employ the recently-developed Fourier Neural Operator (FNO) to learn the functional mapping from the initial observer state and boundary measurements to the state estimate.
We consider the state estimation for three benchmark PDE examples motivated by applications.
- Score: 56.65019598237507
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: State estimation is important for a variety of tasks, from forecasting to
substituting for unmeasured states in feedback controllers. Performing
real-time state estimation for PDEs using provably and rapidly converging
observers, such as those based on PDE backstepping, is computationally
expensive and in many cases prohibitive. We propose a framework for
accelerating PDE observer computations using learning-based approaches that are
much faster while maintaining accuracy. In particular, we employ the
recently-developed Fourier Neural Operator (FNO) to learn the functional
mapping from the initial observer state and boundary measurements to the state
estimate. By employing backstepping observer gains for previously-designed
observers with particular convergence rate guarantees, we provide numerical
experiments that evaluate the increased computational efficiency gained with
FNO. We consider the state estimation for three benchmark PDE examples
motivated by applications: first, for a reaction-diffusion (parabolic) PDE
whose state is estimated with an exponential rate of convergence; second, for a
parabolic PDE with exact prescribed-time estimation; and, third, for a pair of
coupled first-order hyperbolic PDEs modeling traffic flow density and
velocity. The ML-accelerated observers trained on simulation data sets for
these PDEs achieve up to three orders of magnitude improvement in
computational speed compared to classical methods. This demonstrates the
attractiveness of the ML-accelerated observers for real-time state estimation
and control.
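As a rough illustration of the learned mapping described in the abstract, below is a minimal PyTorch sketch of a 1-D Fourier-layer model that maps a sampled initial observer state and a boundary-measurement trace to a state estimate. The class names, input encoding, channel widths, and mode counts are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Core FNO operation: multiply the lowest Fourier modes by learned complex weights."""
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat))

    def forward(self, x):                       # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                # (batch, channels, grid // 2 + 1)
        out_ft = torch.zeros(x.size(0), self.weights.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))

class FNOObserver(nn.Module):
    """Maps (initial observer state, boundary measurements) to a state estimate."""
    def __init__(self, modes=16, width=64):
        super().__init__()
        # Channel 0: u_hat(x, 0); channel 1: boundary measurements resampled
        # onto the spatial grid (a hypothetical input encoding).
        self.lift = nn.Conv1d(2, width, kernel_size=1)
        self.spectral = SpectralConv1d(width, width, modes)
        self.pointwise = nn.Conv1d(width, width, kernel_size=1)
        self.proj = nn.Conv1d(width, 1, kernel_size=1)

    def forward(self, x):
        x = self.lift(x)
        x = torch.relu(self.spectral(x) + self.pointwise(x))
        return self.proj(x)

model = FNOObserver()
estimate = model(torch.randn(8, 2, 128))        # (8, 1, 128) estimated profile
```

Training such a model would regress its output against state estimates produced by numerically solving the classical backstepping observer, matching the simulation-data setup the abstract describes; practical FNOs stack several spectral layers rather than the single one shown here.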
Related papers
- Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE [3.3044728148521623]
We present the first result on applying NOs in adaptive PDE control, demonstrated on a benchmark 1-D hyperbolic PDE with recirculation.
We also present numerical simulations demonstrating stability and observe speedups up to three orders of magnitude.
arXiv Detail & Related papers (2024-01-15T17:52:15Z)
- Moving-Horizon Estimators for Hyperbolic and Parabolic PDEs in 1-D [2.819498895723555]
We introduce moving-horizon estimators for PDEs to remove the need for a numerical solution of an observer PDE in real time.
We accomplish this using the PDE backstepping method which, for certain classes of both hyperbolic and parabolic PDEs, produces moving-horizon state estimates explicitly.
arXiv Detail & Related papers (2024-01-04T19:55:43Z)
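To make the idea of an explicit moving-horizon estimate concrete, here is a generic NumPy sketch in which a precomputed kernel matrix maps the boundary-measurement history over the horizon directly to a state estimate, so no observer PDE is solved online. The linear-in-measurements form and every name here are assumptions for illustration, not the paper's formulas.

```python
import numpy as np

def moving_horizon_estimate(Phi, y_hist, dt):
    """Explicit state estimate from recent boundary measurements.

    Phi    : (n_x, n_tau) kernel matrix, assumed precomputed offline by a
             backstepping design (hypothetical stand-in here)
    y_hist : (n_tau,) boundary measurements over the horizon [t - T, t]
    dt     : sampling period, so Phi @ y_hist * dt approximates an integral
             of kernel times measurement over the horizon
    """
    return Phi @ y_hist * dt                # (n_x,) state profile estimate at t

# Made-up dimensions: 64 grid points, 200-sample measurement horizon.
Phi = np.random.rand(64, 200)               # stands in for a designed kernel
u_hat = moving_horizon_estimate(Phi, np.random.rand(200), dt=0.01)
```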
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
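A deep-equilibrium model, roughly, iterates one weight-tied layer to a fixed point instead of stacking distinct layers; the sketch below shows that iteration pattern with a generic shared update in place of a Fourier layer. The names and the stopping test are assumptions, and the actual FNO-DEQ trains through the fixed point with implicit differentiation rather than this naive loop.

```python
import torch
import torch.nn as nn

class WeightTiedSolver(nn.Module):
    """Iterate one shared layer toward a fixed point z* = f(z*, x)."""
    def __init__(self, width=64):
        super().__init__()
        # One shared update; FNO-DEQ uses a Fourier layer here instead.
        self.f = nn.Conv1d(2 * width, width, kernel_size=1)

    def forward(self, x, max_iter=50, tol=1e-4):
        z = torch.zeros_like(x)             # x: lifted (batch, width, grid) encoding
        for _ in range(max_iter):
            z_next = torch.tanh(self.f(torch.cat([z, x], dim=1)))
            if (z_next - z).norm() < tol * (z.norm() + 1e-8):
                break                       # converged: approximate steady state
            z = z_next
        return z_next

solver = WeightTiedSolver()
u = solver(torch.randn(4, 64, 128))         # (4, 64, 128) equilibrium features
```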
- PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers [40.097474800631]
Time-dependent partial differential equations (PDEs) are ubiquitous in science and engineering.
Deep neural network based surrogates have gained increased interest as faster alternatives to classical solvers, which makes accurate long rollouts a central challenge.
arXiv Detail & Related papers (2023-08-10T17:53:05Z)
- Neural Operators for PDE Backstepping Control of First-Order Hyperbolic PIDE with Recycle and Delay [9.155455179145473]
We extend the recently introduced DeepONet operator-learning framework for PDE control to an advanced hyperbolic class.
The PDE backstepping design produces gain functions that are outputs of a nonlinear operator.
The operator is approximated with a DeepONet neural network to a degree of accuracy that is provably arbitrarily tight.
arXiv Detail & Related papers (2023-07-21T08:57:16Z)
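The branch-trunk structure of a DeepONet, used in the entry above to approximate the backstepping gain operator, can be sketched as follows; the layer sizes and the plain-MLP choice for both nets are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Branch net encodes samples of the input function, trunk net encodes a
    query point, and their dot product evaluates the output function there."""
    def __init__(self, n_sensors=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, 128), nn.ReLU(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.ReLU(),
                                   nn.Linear(128, p))

    def forward(self, u_samples, x_query):
        b = self.branch(u_samples)          # (batch, p) function encoding
        t = self.trunk(x_query)             # (batch, p) location encoding
        return (b * t).sum(dim=-1, keepdim=True)   # (batch, 1) output value

net = DeepONet()
gain = net(torch.randn(32, 100), torch.rand(32, 1))   # (32, 1)
```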
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Robust and Adaptive Temporal-Difference Learning Using An Ensemble of Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z)
- Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network whose partial layers are iteratively reused to refine its previous estimates.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
arXiv Detail & Related papers (2021-11-11T23:31:34Z)
- Long-time integration of parametric evolution equations with physics-informed DeepONets [0.0]
We introduce an effective framework for learning infinite-dimensional operators that map random initial conditions to associated PDE solutions within a short time interval.
Global long-time predictions across a range of initial conditions can then be obtained by iteratively evaluating the trained model.
This introduces a new approach to temporal domain decomposition that is shown to be effective in performing accurate long-time simulations.
arXiv Detail & Related papers (2021-06-09T20:46:17Z)
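The iterative evaluation described above amounts to an autoregressive rollout: an operator trained to advance the solution by a short interval is composed with itself to reach long horizons. A minimal sketch of that loop, with the trained model left as a placeholder, follows.

```python
import torch

def rollout(step_model, u0, n_steps):
    """Compose a learned short-horizon solution operator with itself.

    step_model : callable mapping the state at t to the state at t + dt
    u0         : (batch, channels, grid) initial condition
    n_steps    : number of short intervals to chain together
    """
    u, trajectory = u0, [u0]
    with torch.no_grad():                   # inference-time rollout
        for _ in range(n_steps):
            u = step_model(u)               # advance one short interval
            trajectory.append(u)
    return torch.stack(trajectory, dim=1)   # (batch, n_steps + 1, channels, grid)
```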
- Critical Parameters for Scalable Distributed Learning with Large Batches and Asynchronous Updates [67.19481956584465]
It has been experimentally observed that the efficiency of distributed training with stochastic gradient descent (SGD) depends decisively on the batch size and -- in asynchronous implementations -- on the gradient staleness.
We show that our results are tight and illustrate key findings in numerical experiments.
arXiv Detail & Related papers (2021-03-03T12:08:23Z)