Efficient PDE-Constrained optimization under high-dimensional
uncertainty using derivative-informed neural operators
- URL: http://arxiv.org/abs/2305.20053v1
- Date: Wed, 31 May 2023 17:26:20 GMT
- Title: Efficient PDE-Constrained optimization under high-dimensional
uncertainty using derivative-informed neural operators
- Authors: Dingcheng Luo, Thomas O'Leary-Roseberry, Peng Chen, Omar Ghattas
- Abstract summary: We propose a novel framework for solving large-scale partial differential equations (PDEs) with high-dimensional random parameters.
We refer to such neural operators as multi-input reduced basis derivative informed neural operators (MR-DINOs).
We show that MR-DINOs offer $10^{3}$--$10^{7} \times$ reductions in execution time, and are able to produce OUU solutions of comparable accuracies to those from standard PDE-based solutions.
- Score: 6.296120102486062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel machine learning framework for solving optimization
problems governed by large-scale partial differential equations (PDEs) with
high-dimensional random parameters. Such optimization under uncertainty (OUU)
problems may be computationally prohibitive using classical methods, particularly
when a large number of samples is needed to evaluate risk measures at every
iteration of an optimization algorithm, where each sample requires the solution
of an expensive-to-solve PDE. To address this challenge, we propose a new
neural operator approximation of the PDE solution operator that has the
combined merits of (1) accurate approximation of not only the map from the
joint inputs of random parameters and optimization variables to the PDE state,
but also its derivative with respect to the optimization variables, (2)
efficient construction of the neural network using reduced basis architectures
that are scalable to high-dimensional OUU problems, and (3) requiring only a
limited number of training data to achieve high accuracy for both the PDE
solution and the OUU solution. We refer to such neural operators as multi-input
reduced basis derivative informed neural operators (MR-DINOs). We demonstrate
the accuracy and efficiency of our approach through several numerical experiments,
namely the risk-averse control of a semilinear elliptic PDE and the steady-state
Navier--Stokes equations in two and three spatial dimensions, each involving
random field inputs. Across the examples, MR-DINOs offer $10^{3}$--$10^{7}
\times$ reductions in execution time, and are able to produce OUU solutions of
comparable accuracies to those from standard PDE-based solutions while being
over $10 \times$ more cost-efficient after factoring in the cost of
construction.
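As a rough illustration of the derivative-informed training idea described above, the sketch below trains a surrogate that maps reduced random-parameter coordinates $m_r$ and optimization variables $z$ to reduced PDE-state coordinates $u_r$, penalizing misfit in both the state and its Jacobian with respect to $z$. This is a hypothetical sketch, not the authors' MR-DINO implementation; all class names, shapes, and hyperparameters are illustrative.

```python
# Hypothetical sketch of a derivative-informed surrogate in PyTorch.
# It is NOT the authors' MR-DINO code; names and architecture are illustrative.
import torch
import torch.nn as nn


class ReducedBasisSurrogate(nn.Module):
    """Maps (reduced random parameter m_r, optimization variable z) -> reduced state u_r."""

    def __init__(self, dim_mr: int, dim_z: int, dim_ur: int, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_mr + dim_z, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, dim_ur),
        )

    def forward(self, m_r: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([m_r, z], dim=-1))


def derivative_informed_loss(model, m_r, z, u_r, du_dz, w_grad: float = 1.0):
    """L2 misfit on reduced states plus misfit on the Jacobian dU_r/dz.

    du_dz holds reference reduced Jacobians of shape (batch, dim_ur, dim_z),
    e.g. obtained from tangent/adjoint solves of the underlying PDE.
    """
    z = z.detach().requires_grad_(True)
    pred = model(m_r, z)                               # (batch, dim_ur)
    state_loss = ((pred - u_r) ** 2).mean()

    # Column-by-column autograd Jacobian; adequate when dim_ur and dim(z) are modest.
    cols = [
        torch.autograd.grad(pred[:, j].sum(), z, create_graph=True)[0]
        for j in range(pred.shape[1])
    ]
    jac = torch.stack(cols, dim=1)                     # (batch, dim_ur, dim_z)
    grad_loss = ((jac - du_dz) ** 2).mean()
    return state_loss + w_grad * grad_loss
```

In MR-DINOs the reduced bases for the inputs and outputs are constructed from the problem itself (e.g., derivative-informed or proper orthogonal decomposition subspaces); here they are simply assumed to be precomputed projections already applied to the training data.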
Related papers
- Constrained or Unconstrained? Neural-Network-Based Equation Discovery from Data [0.0]
We represent the PDE as a neural network and use an intermediate state representation similar to a Physics-Informed Neural Network (PINN).
We present a penalty method and a widely used trust-region barrier method to solve this constrained optimization problem.
Our results on the Burgers' and the Korteweg-de Vries equations demonstrate that the latter constrained method outperforms the penalty method.
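For orientation (illustrative notation, not necessarily the paper's), the two strategies can be contrasted as the constrained problem $\min_{\theta,\,u} \|u - u_{\text{data}}\|^2$ subject to $\mathcal{N}_\theta(u) = 0$, versus its penalty relaxation $\min_{\theta,\,u} \|u - u_{\text{data}}\|^2 + \mu\,\|\mathcal{N}_\theta(u)\|^2$, where $\mathcal{N}_\theta$ is the candidate PDE residual with unknown coefficients $\theta$, $u$ is the PINN-like intermediate state, and $\mu > 0$ is the penalty weight.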
arXiv Detail & Related papers (2024-05-30T01:55:44Z)
- BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO).
The goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
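In the standard inverse-optimization setting (written here only for illustration), one seeks parameters $\theta$ such that the observed decisions $x_i^{\text{obs}}$ are (near-)optimal for the forward model, e.g. $\min_\theta \sum_i \ell\big(x_i^{\text{obs}}, x^\star(\theta)\big)$ with $x^\star(\theta) \in \arg\min_x f(x;\theta)$ subject to the model's constraints; BO4IO tackles this bilevel problem with Bayesian optimization over $\theta$, which also provides uncertainty estimates for the recovered parameters.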
arXiv Detail & Related papers (2024-05-28T06:52:17Z)
- Approximation of Solution Operators for High-dimensional PDEs [2.3076986663832044]
We propose a finite-dimensional control-based method to approximate solution operators for evolutional partial differential equations.
Results are presented for several high-dimensional PDEs, including real-world applications to solving Hamilton-Jacobi-Bellman equations.
arXiv Detail & Related papers (2024-01-18T21:45:09Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues, the first being that following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach [1.9030954416586594]
We study the combination of the alternating direction method of multipliers (ADMM) with physics-informed neural networks (PINNs).
The resulting ADMM-PINNs algorithmic framework substantially enlarges the applicable range of PINNs to nonsmooth cases of PDE-constrained optimization problems.
We validate the efficiency of the ADMM-PINNs algorithmic framework by different prototype applications.
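A generic instance of such a splitting (illustrative only; the paper's exact scheme may differ) handles an objective with a smooth PDE-constrained part $F$ and a nonsmooth regularizer $R$ by introducing an auxiliary copy $v$ of the control $c$ and iterating $c^{k+1} \in \arg\min_c F(c) + \tfrac{\beta}{2}\|c - v^k + \lambda^k/\beta\|^2$ (a smooth subproblem that a PINN can solve), $v^{k+1} = \operatorname{prox}_{R/\beta}(c^{k+1} + \lambda^k/\beta)$, and $\lambda^{k+1} = \lambda^k + \beta\,(c^{k+1} - v^{k+1})$, so the PINN never has to differentiate the nonsmooth term.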
arXiv Detail & Related papers (2023-02-16T14:17:30Z)
- Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
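Concretely (standard identities, stated here only for illustration), if the smoothed model is $f_\sigma(x) = \mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\sigma^2 I)}[f(x+\varepsilon)]$, then Stein's identity gives $\nabla f_\sigma(x) = \tfrac{1}{\sigma^2}\mathbb{E}[\varepsilon\, f(x+\varepsilon)]$ and $\nabla^2 f_\sigma(x) = \tfrac{1}{\sigma^2}\mathbb{E}\big[\big(\tfrac{\varepsilon\varepsilon^{\top}}{\sigma^2} - I\big) f(x+\varepsilon)\big]$, so the second-order derivatives in a PDE residual can be estimated by Monte Carlo sampling of forward evaluations alone.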
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Physics-Informed Neural Operator for Learning Partial Differential
Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
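Schematically (not the paper's exact notation), the PINO loss combines a data misfit with a PDE-residual penalty on the learned operator $\mathcal{G}_\theta$, e.g. $\mathcal{L}(\theta) = \|\mathcal{G}_\theta(a) - u\|^2 + \lambda\,\|\mathcal{R}\big(a, \mathcal{G}_\theta(a)\big)\|^2$, where the data term can use coarse-resolution solutions while the residual term $\mathcal{R}$ is evaluated at a finer resolution.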
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Semi-Implicit Neural Solver for Time-dependent Partial Differential
Equations [4.246966726709308]
We propose a neural solver to learn an optimal iterative scheme in a data-driven fashion for any class of PDEs.
We provide theoretical guarantees for the correctness and convergence of neural solvers analogous to conventional iterative solvers.
arXiv Detail & Related papers (2021-09-03T12:03:10Z)
- Speeding up Computational Morphogenesis with Online Neural Synthetic
Gradients [51.42959998304931]
A wide range of modern science and engineering applications are formulated as optimization problems with a system of partial differential equations (PDEs) as constraints.
These PDE-constrained optimization problems are typically solved in a standard discretize-then-optimize approach.
We propose a general framework to speed up PDE-constrained optimization using online neural synthetic gradients (ONSG) with a novel two-scale optimization scheme.
arXiv Detail & Related papers (2021-04-25T22:43:51Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, that makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)