PDE-constrained Models with Neural Network Terms: Optimization and
Global Convergence
- URL: http://arxiv.org/abs/2105.08633v6
- Date: Mon, 16 Oct 2023 01:51:13 GMT
- Title: PDE-constrained Models with Neural Network Terms: Optimization and
Global Convergence
- Authors: Justin Sirignano, Jonathan MacArt, Konstantinos Spiliopoulos
- Abstract summary: Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering.
We rigorously study the optimization of a class of linear elliptic PDEs with neural network terms.
We train a neural network model for an application in fluid mechanics, in which the neural network functions as a closure model for the Reynolds-averaged Navier-Stokes equations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has used deep learning to develop partial differential
equation (PDE) models in science and engineering. The functional form of the
PDE is determined by a neural network, and the neural network parameters are
calibrated to available data. Calibration of the embedded neural network can be
performed by optimizing over the PDE. Motivated by these applications, we
rigorously study the optimization of a class of linear elliptic PDEs with
neural network terms. The neural network parameters in the PDE are optimized
using gradient descent, where the gradient is evaluated using an adjoint PDE.
As the number of parameters becomes large, the PDE and adjoint PDE converge to a
non-local PDE system. Using this limit PDE system, we are able to prove
convergence of the neural network-PDE to a global minimum during the
optimization. Finally, we use this adjoint method to train a neural network
model for an application in fluid mechanics, in which the neural network
functions as a closure model for the Reynolds-averaged Navier-Stokes (RANS)
equations. The RANS neural network model is trained on several datasets for
turbulent channel flow and is evaluated out-of-sample at different Reynolds
numbers.
Related papers
- End-to-End Mesh Optimization of a Hybrid Deep Learning Black-Box PDE Solver [24.437884270729903]
Recent research proposed a PDE correction framework that leverages deep learning to correct the solution obtained by a PDE solver on a coarse mesh.
End-to-end training of such a PDE correction model requires the PDE solver to support automatic differentiation through the iterative numerical process.
In this study, we explore the feasibility of end-to-end training of a hybrid model with a black-box PDE solver and a deep learning model for fluid flow prediction.
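One generic way to train through a solver that exposes no gradients is a zeroth-order (finite-difference) estimate of the gradient with respect to a small correction term. The sketch below is a stand-in illustration of that idea under simplified assumptions, not the cited paper's actual training procedure: the "solver" is an opaque function of a low-dimensional correction vector p.

```python
import numpy as np

# Illustrative zeroth-order training through a black-box solver
# (a generic sketch, not the cited paper's method).

rng = np.random.default_rng(2)
n, k = 30, 4
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # toy coarse operator
B = rng.normal(size=(n, k))                             # correction basis
f = rng.normal(size=n)
p_true = rng.normal(size=k)
u_ref = np.linalg.solve(A, f + B @ p_true)              # "fine" reference

def black_box_solve(p):
    # Treated as opaque: we never differentiate through this call.
    return np.linalg.solve(A, f + B @ p)

def loss(p):
    return 0.5 * np.sum((black_box_solve(p) - u_ref) ** 2)

p = np.zeros(k)
eps = 1e-5
lr = 1.0 / np.linalg.norm(B, 2) ** 2    # conservative step for this loss
loss0 = loss(p)
for _ in range(2000):
    g = np.zeros(k)
    for i in range(k):                  # central-difference gradient estimate
        e = np.zeros(k)
        e[i] = eps
        g[i] = (loss(p + e) - loss(p - e)) / (2.0 * eps)
    p -= lr * g
loss1 = loss(p)
```

Each gradient estimate costs 2k solver calls, which is why such zeroth-order schemes only scale to small correction models; differentiable or adjoint-capable solvers avoid that cost.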
arXiv Detail & Related papers (2024-04-17T21:49:45Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
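The deep-equilibrium idea behind this approach can be sketched in a few lines. All names and sizes below are illustrative stand-ins (a dense weight-tied layer instead of an FNO block): a map z -> tanh(Wz + Ux + b) is iterated to a fixed point z*, which plays the role of the steady-state solution.

```python
import numpy as np

# Illustrative deep-equilibrium sketch: a weight-tied layer iterated to a
# fixed point by Picard iteration. W is scaled so the map is a contraction.

rng = np.random.default_rng(1)
d = 16
W = rng.normal(size=(d, d)) * (0.2 / np.sqrt(d))  # small spectral norm
U = rng.normal(size=(d, d)) * 0.5
b = rng.normal(size=d) * 0.1
x = rng.normal(size=d)                            # "PDE parameters" input

def layer(z):
    return np.tanh(W @ z + U @ x + b)

z = np.zeros(d)
for _ in range(200):                              # Picard iteration
    z_new = layer(z)
    if np.max(np.abs(z_new - z)) < 1e-10:
        break
    z = z_new

residual = np.max(np.abs(layer(z) - z))           # fixed-point residual
```

In practice DEQ models use accelerated root finders (e.g. Anderson acceleration) rather than plain Picard iteration, but the fixed-point formulation is the same.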
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Reduced-order modeling for parameterized PDEs via implicit neural representations [4.135710717238787]
We present a new data-driven reduced-order modeling approach to efficiently solve parametrized partial differential equations (PDEs).
The proposed framework encodes the PDE and utilizes a parametrized neural ODE (PNODE) to learn latent dynamics characterized by multiple PDE parameters.
We evaluate the proposed method at a large Reynolds number and obtain speedups of up to O(10^3) with a 1% relative error against ground-truth values.
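The architecture described above can be sketched as a forward pass: encode the full-order state to a latent vector, evolve it with a small network that also sees the PDE parameter, and decode back. Everything below is an illustrative stand-in (random linear encoder/decoder, forward Euler time stepping), not the paper's trained model.

```python
import numpy as np

# Architectural sketch of a parameterized neural-ODE reduced-order model
# (illustrative stand-ins throughout; nothing here is trained).

rng = np.random.default_rng(3)
n_full, n_lat, n_hid = 200, 8, 32

E = rng.normal(size=(n_lat, n_full)) / np.sqrt(n_full)  # encoder (stand-in)
D = rng.normal(size=(n_full, n_lat)) / np.sqrt(n_lat)   # decoder (stand-in)
W1 = rng.normal(size=(n_hid, n_lat + 1)) * 0.1          # +1 input for mu
W2 = rng.normal(size=(n_lat, n_hid)) * 0.1

def latent_rhs(z, mu):
    # dz/dt = f(z, mu): the latent dynamics see the PDE parameter mu.
    return W2 @ np.tanh(W1 @ np.concatenate([z, [mu]]))

def rollout(u0, mu, dt=0.01, steps=100):
    z = E @ u0                        # encode the full-order state
    for _ in range(steps):            # forward Euler on the latent ODE
        z = z + dt * latent_rhs(z, mu)
    return D @ z                      # decode back to full order

u0 = np.sin(np.linspace(0.0, np.pi, n_full))
u_T = rollout(u0, mu=0.5)
```

The speedup claimed by such methods comes from time stepping in the low-dimensional latent space (here 8 dimensions) instead of the full-order space (here 200).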
arXiv Detail & Related papers (2023-11-28T01:35:06Z) - LatentPINNs: Generative physics-informed neural networks via a latent
representation learning [0.0]
We introduce latentPINN, a framework that utilizes latent representations of the PDE parameters as additional (to the coordinates) inputs into PINNs.
We use a two-stage training scheme: in the first stage, we learn the latent representations for the distribution of PDE parameters.
In the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain.
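The two-stage scheme can be sketched with loud simplifications: below, a plain SVD/PCA stands in for the learned latent representation of stage 1, and stage 2 shows only how the latent code is concatenated with the coordinates to form the network inputs. None of this is the paper's implementation.

```python
import numpy as np

# Two-stage sketch in the spirit of latentPINN (illustrative stand-ins).

rng = np.random.default_rng(4)
n_param, n_samples, n_lat = 50, 200, 3

# Stage 1: latent codes for sampled PDE-parameter fields
# (SVD/PCA here stands in for a learned generative representation).
grid = np.linspace(0.0, 1.0, n_param)
modes = np.stack([np.sin(np.pi * grid), np.cos(np.pi * grid)])
P = rng.normal(size=(n_samples, 2)) @ modes          # sampled parameter fields
_, _, Vt = np.linalg.svd(P, full_matrices=False)
codes = P @ Vt[:n_lat].T                             # (n_samples, n_lat)
recon_err = np.linalg.norm(codes @ Vt[:n_lat] - P) / np.linalg.norm(P)

# Stage 2: assemble inputs [x, t, z] for the solution network at
# randomly drawn collocation points in the domain.
n_col = 1000
xt = rng.uniform(size=(n_col, 2))                    # (x, t) coordinates
z = codes[rng.integers(n_samples)]                   # one drawn latent code
inputs = np.hstack([xt, np.tile(z, (n_col, 1))])     # (n_col, 2 + n_lat)
```

Conditioning the solution network on the latent code is what lets a single trained PINN cover a whole distribution of PDE parameters instead of a single instance.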
arXiv Detail & Related papers (2023-05-11T16:54:17Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
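The stability advantage of implicit updates can be shown on a toy problem (not from the paper): on a stiff quadratic loss L(w) = 0.5 w'Dw, the implicit step w_new = w - lr * grad L(w_new) has the closed form w_new = (I + lr D)^{-1} w and contracts for any lr > 0, whereas the explicit step diverges once lr exceeds 2/max(D).

```python
import numpy as np

# Explicit vs implicit gradient descent on a stiff quadratic (toy example).

D = np.diag([1.0, 10.0, 1000.0])       # stiff curvature spectrum
lr = 0.1                               # explicit GD needs lr < 0.002 here
w_exp = np.ones(3)
w_imp = np.ones(3)
M = np.linalg.inv(np.eye(3) + lr * D)  # closed-form implicit-step matrix
for _ in range(100):
    w_exp = w_exp - lr * (D @ w_exp)   # explicit: blows up on the stiff mode
    w_imp = M @ w_imp                  # implicit: contracts every mode
```

For general nonlinear losses the implicit equation has no closed form and must be solved approximately at each step, which is the extra cost ISGD-style methods pay for stability.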
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
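The POD ingredient of this pipeline can be sketched in a few lines (the cited paper additionally regresses neural networks onto the basis and uses a branch network; the problem below is an illustrative stand-in): build snapshots of a parameterized field, extract a reduced basis by SVD, and check the reduced reconstruction of an unseen parameter.

```python
import numpy as np

# Minimal POD sketch on a synthetic parameterized family (illustrative).

x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 3.0, 25)                   # "PDE parameter" samples
S = np.stack([np.exp(-mu * x) * np.sin(2 * np.pi * x) for mu in mus], axis=1)

U, s, Vt = np.linalg.svd(S, full_matrices=False)  # snapshot SVD
B = U[:, :5]                                      # 5-mode POD basis

u_new = np.exp(-2.2 * x) * np.sin(2 * np.pi * x)  # unseen parameter value
u_rom = B @ (B.T @ u_new)                         # project onto the basis
err = np.linalg.norm(u_rom - u_new) / np.linalg.norm(u_new)
```

The rapid singular-value decay of smooth parameterized families is what makes a handful of POD modes sufficient, and is the reason reduced-order approximations like this are cheap to evaluate.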
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
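The Stein-type identity underlying this trick can be checked numerically in one dimension: for the Gaussian-smoothed function f_sigma(x) = E[f(x + sigma*eps)] with eps ~ N(0,1), one has f_sigma''(x) = E[f(x + sigma*eps) * (eps^2 - 1)] / sigma^2, which needs only forward evaluations of f. The example below is an illustration with a quadratic f, for which smoothing leaves the second derivative exactly at 2; subtracting f(x) inside the expectation is a standard variance-reduction step that does not change the expectation.

```python
import numpy as np

# Monte Carlo check of the back-propagation-free second derivative
# (illustrative; a quadratic test function, not a PINN).

f = lambda x: x ** 2          # second derivative is 2 everywhere
x, sigma, n = 1.0, 0.5, 200_000
rng = np.random.default_rng(5)
eps = rng.normal(size=n)
est = np.mean((f(x + sigma * eps) - f(x)) * (eps ** 2 - 1.0)) / sigma ** 2
```

For non-polynomial targets the estimator approximates the derivative of the smoothed model rather than of f itself, which is why the paper parameterizes the solution by the smoothed model in the first place.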
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate the data requirements of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
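A single such symmetry can be verified numerically (this is one illustrative transformation, not the paper's exhaustive list): if u solves the heat equation u_t = u_xx, so does any space translate u(x - a, t), so translates of known solutions are free augmented training samples. Below both the original field and a translate are checked against the equation with finite differences.

```python
import numpy as np

# Check that space translation maps heat-equation solutions to solutions
# (illustrative symmetry check for data augmentation).

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
t = 0.3
a = 0.7                                         # translation parameter

u = lambda xx: np.exp(-t) * np.sin(xx)          # exact solution at time t
u_t = lambda xx: -np.exp(-t) * np.sin(xx)       # its exact time derivative

def residual(shift):
    # PDE residual |u_t - u_xx| for the translated field, periodic stencil.
    v = u(x - shift)
    v_xx = (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2
    return np.max(np.abs(u_t(x - shift) - v_xx))

r0, ra = residual(0.0), residual(a)             # original and translated
```

Each admitted symmetry (translations, scalings, Galilean boosts, and so on, depending on the PDE) multiplies the effective dataset in the same way.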
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- NeuralPDE: Modelling Dynamical Systems from Data [0.44259821861543996]
We propose NeuralPDE, a model which combines convolutional neural networks (CNNs) with differentiable ODE solvers to model dynamical systems.
We show that the Method of Lines used in standard PDE solvers can be represented using convolutions which makes CNNs the natural choice to parametrize arbitrary PDE dynamics.
Our model can be applied to any data without requiring any prior knowledge about the governing PDE.
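The Method-of-Lines observation above can be made concrete with a fixed stencil (the cited model replaces this hand-written stencil with learned CNN filters): the periodic heat equation u_t = u_xx is semi-discretized by the convolution kernel [1, -2, 1] / dx^2 and integrated in time with forward Euler.

```python
import numpy as np

# Method of Lines as a convolution (illustrative fixed stencil; NeuralPDE
# would learn the spatial filters instead). u0 = sin(x) decays like e^{-t}.

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
dt, T = 0.001, 0.5                   # dt < dx^2 / 2 for explicit stability
u = np.sin(x)
for _ in range(int(T / dt)):
    # periodic [1, -2, 1] / dx^2 stencil, i.e. a 1-D convolution
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u = u + dt * u_xx
err = np.max(np.abs(u - np.exp(-T) * np.sin(x)))
```

Since finite-difference stencils are exactly convolutions, a CNN right-hand side paired with an ODE integrator recovers the classical Method of Lines with learnable spatial operators.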
arXiv Detail & Related papers (2021-11-15T10:59:52Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Neural-PDE: A RNN based neural network for solving time dependent PDEs [6.560798708375526]
Partial differential equations (PDEs) play a crucial role in studying a vast number of problems in science and engineering.
We propose a sequence deep learning framework called Neural-PDE, which can automatically learn the governing rules of any time-dependent PDE system.
In our experiments, Neural-PDE efficiently extracts the dynamics within 20 epochs of training and produces accurate predictions.
arXiv Detail & Related papers (2020-09-08T15:46:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.