Train Once and Use Forever: Solving Boundary Value Problems in Unseen
Domains with Pre-trained Deep Learning Models
- URL: http://arxiv.org/abs/2104.10873v1
- Date: Thu, 22 Apr 2021 05:20:27 GMT
- Title: Train Once and Use Forever: Solving Boundary Value Problems in Unseen
Domains with Pre-trained Deep Learning Models
- Authors: Hengjie Wang, Robert Planas, Aparna Chandramowlishwaran, Ramin
Bostanabad
- Abstract summary: This paper introduces a transferable framework for solving boundary value problems (BVPs) via deep neural networks.
First, we introduce the genomic flow network (GFNet), a neural network that can infer the solution of a BVP across arbitrary boundary conditions.
Then, we propose the mosaic flow (MF) predictor, a novel iterative algorithm that assembles or stitches the GFNet's inferences.
- Score: 0.20999222360659606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-informed neural networks (PINNs) are increasingly employed to
replace/augment traditional numerical methods in solving partial differential
equations (PDEs). While they have many attractive features, state-of-the-art
PINNs act as surrogates for a specific realization of a PDE system and hence
are problem-specific. That is, each time the boundary conditions or domain
shape change, the model needs to be re-trained. This limitation prohibits the
application of PINNs in realistic or large-scale engineering problems,
especially since the costs and effort associated with their training are
considerable.
This paper introduces a transferable framework for solving boundary value
problems (BVPs) via deep neural networks which can be trained once and used
forever for various domains of unseen sizes, shapes, and boundary conditions.
First, we introduce the genomic flow network (GFNet), a neural network that
can infer the solution of a BVP across arbitrary boundary conditions on a
small square domain called a genome. Then, we propose the mosaic flow (MF)
predictor, a novel iterative algorithm that assembles or stitches the GFNet's
inferences to obtain the solution of BVPs on unseen, large domains while
preserving the spatial regularity of the solution. We demonstrate that our
framework can estimate the solution of Laplace and Navier-Stokes equations in
domains of unseen shapes and boundary conditions that are, respectively, 1200
and 12 times larger than the domains where training is performed. Since our
framework eliminates the need to re-train, it achieves speedups of up to three
orders of magnitude compared to the state-of-the-art.
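For intuition, the sketch below shows the stitch-and-iterate idea behind the MF predictor under stated assumptions: a hypothetical `gfnet` callable (not the authors' released API) maps the boundary values of a genome-sized patch to its interior solution, and overlapping inferences are averaged until the global field settles.

    import numpy as np

    def boundary_of(patch):
        # Concatenate the four edges of a square patch into one BC vector.
        return np.concatenate([patch[0, :], patch[-1, :],
                               patch[1:-1, 0], patch[1:-1, -1]])

    def mosaic_flow(solution, gfnet, genome=32, iters=50):
        # Refine `solution` on a large grid by stitching inferences of a
        # pre-trained GFNet over overlapping genome-sized patches.
        # `gfnet(bc)` maps a genome's boundary values to its
        # (genome-2) x (genome-2) interior solution.
        H, W = solution.shape
        stride = genome // 2                     # overlap neighbouring genomes
        for _ in range(iters):
            acc = np.zeros_like(solution)
            cnt = np.zeros_like(solution)
            for i in range(0, H - genome + 1, stride):
                for j in range(0, W - genome + 1, stride):
                    bc = boundary_of(solution[i:i + genome, j:j + genome])
                    acc[i + 1:i + genome - 1, j + 1:j + genome - 1] += gfnet(bc)
                    cnt[i + 1:i + genome - 1, j + 1:j + genome - 1] += 1
            inside = cnt > 0                     # the global boundary stays fixed
            solution[inside] = acc[inside] / cnt[inside]
        return solution

Because only the genome-level network is learned, changing the domain's size or shape changes only the tiling loop, not the network; this is what makes the framework train-once.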
Related papers
- Non-overlapping, Schwarz-type Domain Decomposition Method for Physics and Equality Constrained Artificial Neural Networks [0.24578723416255746]
We present a non-overlapping, Schwarz-type domain decomposition method with a generalized interface condition.
Our approach employs physics and equality-constrained artificial neural networks (PECANN) within each subdomain.
A distinct advantage of our domain decomposition method is its ability to learn solutions to both Poisson's and Helmholtz equations.
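A minimal sketch of one Schwarz sweep with a Robin-type generalized interface condition follows; the subdomain networks, optimizers, and `pde_loss` are hypothetical stand-ins for the PECANN formulation, and normal-direction signs are glossed over.

    import torch

    def robin_trace(u, x, beta=1.0):
        # Value plus beta times the derivative of network u at interface
        # points x: a Robin-type generalized interface quantity.
        x = x.clone().requires_grad_(True)
        v = u(x)
        dv = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
        return v + beta * dv

    def schwarz_sweep(u1, u2, opt1, opt2, pde_loss, x_iface):
        # One non-overlapping Schwarz sweep: each subdomain network fits
        # its own PDE residual plus a match of the generalized interface
        # condition against its frozen neighbour.
        for u, opt, nbr in ((u1, opt1, u2), (u2, opt2, u1)):
            target = robin_trace(nbr, x_iface).detach()  # neighbour, frozen
            opt.zero_grad()
            loss = pde_loss(u) + (robin_trace(u, x_iface) - target).pow(2).mean()
            loss.backward()
            opt.step()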
arXiv Detail & Related papers (2024-09-20T16:48:55Z)
- Extremization to Fine Tune Physics Informed Neural Networks for Solving Boundary Value Problems [0.1874930567916036]
The Theory of Functional Connections (TFC) is used to exactly impose the initial and boundary conditions (IBCs) of (I)BVPs on PINNs.
We propose a modification to the TFC framework named Reduced TFC and show a significant improvement in the training and inference time of PINNs.
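As an illustration of what exact imposition means, here is the classic TFC constrained expression for a 1D BVP on [0, 1] (a textbook form, not the paper's Reduced TFC):

    import torch

    def constrained_u(net, x, a=0.0, b=1.0):
        # TFC constrained expression on [0, 1]: u(0) = a and u(1) = b hold
        # exactly for ANY network output, so training only needs to drive
        # the PDE residual to zero.
        N  = net(x)
        N0 = net(torch.zeros_like(x))
        N1 = net(torch.ones_like(x))
        return N + (1 - x) * (a - N0) + x * (b - N1)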
arXiv Detail & Related papers (2024-06-07T23:25:13Z)
- Physics-Informed Boundary Integral Networks (PIBI-Nets): A Data-Driven Approach for Solving Partial Differential Equations [1.6435014180036467]
Partial differential equations (PDEs) are widely used to describe relevant phenomena in dynamical systems.
In high-dimensional settings, PINNs often suffer from computational problems because they require dense collocation points over the entire computational domain.
We present Physics-Informed Boundary Integral Networks (PIBI-Nets) as a data-driven approach for solving PDEs in one dimension less than the original problem space.
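A hedged sketch of the dimension-reduction idea: represent the solution as a single-layer potential with a learned boundary density, so collocation lives on the boundary curve only. The Green's function below is the 2D Laplace one; node placement, singular quadrature, and all names are illustrative assumptions.

    import math
    import torch

    def greens_2d(x, y):
        # Free-space Green's function of the 2D Laplacian, -ln(r) / (2*pi),
        # for all pairs of points in x (N, 2) and y (M, 2).
        r = (x[:, None, :] - y[None, :, :]).norm(dim=-1)
        return -torch.log(r) / (2 * math.pi)

    def single_layer_u(density_net, x_eval, y_src, w_src):
        # u(x) ~ sum_j G(x, y_j) * sigma(y_j) * w_j: a quadrature of a
        # single-layer potential whose density sigma is a neural network.
        # y_src/w_src are boundary quadrature nodes and weights, so the
        # unknowns live one dimension below the domain.
        sigma = density_net(y_src).squeeze(-1)             # (M,)
        return greens_2d(x_eval, y_src) @ (sigma * w_src)  # (N,)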
arXiv Detail & Related papers (2023-08-18T14:03:34Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
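As a minimal 1D illustration of a hard constraint given by a linear differential operator (not the paper's general mechanism), one can bake u'(0) = 0 into the field by construction:

    import torch

    def hard_neumann_u(net, x):
        # Build u(x) = N(x) - x * N'(0), so the linear constraint u'(0) = 0
        # holds exactly for ANY differentiable network N.
        x0 = torch.zeros(1, 1, requires_grad=True)
        dN0 = torch.autograd.grad(net(x0).sum(), x0, create_graph=True)[0]
        return net(x) - x * dN0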
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues. First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
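The ODE-in-parameter-space idea, in a deliberately naive form (a regularized least-squares step; the paper's method is more careful precisely because this system becomes ill-conditioned):

    import numpy as np

    def ivp_step(theta, J, f_vals, dt, reg=1e-8):
        # One explicit step of an ODE-based IVP solver in parameter space:
        # solve the least-squares system J @ theta_dot ~ f for theta_dot
        # (J = d u_theta / d theta at sample points, f the PDE right-hand
        # side there) and advance theta. The Tikhonov term `reg` is a
        # crude hedge against the ill-conditioning of J described above.
        JtJ = J.T @ J + reg * np.eye(J.shape[1])
        theta_dot = np.linalg.solve(JtJ, J.T @ f_vals)
        return theta + dt * theta_dot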
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- BINN: A deep learning approach for computational mechanics problems based on boundary integral equations [4.397337158619076]
We propose the boundary-integral type neural networks (BINN) for boundary value problems in computational mechanics.
The boundary integral equations are employed to transfer all the unknowns to the boundary; the unknowns are then approximated using neural networks and solved through a training process.
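A hedged sketch of a boundary-integral residual with a network density; the second-kind form, kernel, and quadrature weights are illustrative assumptions, not the paper's exact formulation:

    import torch

    def bie_residual(density_net, y_bdry, w, g_bdry, kernel):
        # Collocate a boundary integral equation of the second kind,
        #   q(x)/2 + int K(x, y) q(y) ds(y) = g(x),  x on the boundary,
        # with the unknown boundary density q given by a neural network.
        # `kernel` returns the (M, M) matrix K(x_i, y_j); singular
        # quadrature corrections are glossed over.
        q = density_net(y_bdry).squeeze(-1)             # (M,)
        lhs = q / 2 + kernel(y_bdry, y_bdry) @ (q * w)  # discretized BIE
        return (lhs - g_bdry).pow(2).mean()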
arXiv Detail & Related papers (2023-01-11T14:10:23Z)
- Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the solution found.
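One way to realize the gradual expansion, sketched under stated assumptions (agreement measured by ensemble standard deviation, with an illustrative threshold):

    import torch

    def expand_trusted(ensemble, x_candidates, tol=1e-3):
        # A candidate collocation point is promoted to the training set
        # once all ensemble members agree on its value, so the solution
        # interval grows outward from the observed data.
        with torch.no_grad():
            preds = torch.stack([m(x_candidates) for m in ensemble])  # (E, N, 1)
            spread = preds.std(dim=0).squeeze(-1)                     # (N,)
        return x_candidates[spread < tol]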
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
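A minimal message-passing layer of the kind such solvers stack; shapes and feature choices are illustrative, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class MPLayer(nn.Module):
        # One message-passing update on a discretization graph: the learned
        # message from node j to node i consumes both node states and their
        # relative position, generalizing a finite-difference stencil.
        def __init__(self, dim, pos_dim=1):
            super().__init__()
            self.msg = nn.Sequential(nn.Linear(2 * dim + pos_dim, dim),
                                     nn.ReLU(), nn.Linear(dim, dim))
            self.upd = nn.Sequential(nn.Linear(2 * dim, dim),
                                     nn.ReLU(), nn.Linear(dim, dim))

        def forward(self, h, edges, rel_pos):
            src, dst = edges                                  # (E,), (E,)
            m = self.msg(torch.cat([h[src], h[dst], rel_pos], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, m)   # sum incoming
            return h + self.upd(torch.cat([h, agg], dim=-1))  # residual update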
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Physics and Equality Constrained Artificial Neural Networks: Application to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDEs).
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
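The equality-constrained reformulation can be sketched as an augmented-Lagrangian objective, a standard form consistent with the PECANN idea; the update rule mentioned in the comments is illustrative:

    import torch

    def augmented_lagrangian(pde_res, bc_res, lam, mu):
        # Minimize the PDE residual subject to equality constraints
        # (boundary/data residuals equal to zero), relaxed with an
        # augmented Lagrangian instead of hand-tuned loss weights.
        # The multipliers `lam` and penalty `mu` are updated outside,
        # e.g. lam <- lam + mu * bc_res after each training stage.
        return (pde_res ** 2).mean() + (lam * bc_res).mean() \
               + 0.5 * mu * (bc_res ** 2).mean()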
arXiv Detail & Related papers (2021-09-30T05:55:35Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
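A guess at the shape of the dual-network ansatz, assuming one periodic (sine) branch and one non-periodic (sigmoid) branch; widths and the combination rule are illustrative:

    import torch
    import torch.nn as nn

    class DualNet(nn.Module):
        # One sine-activated branch for oscillatory structure and one
        # sigmoid-activated branch for secular structure, combined by a
        # single linear head.
        def __init__(self, d_in, width=32):
            super().__init__()
            self.osc = nn.Linear(d_in, width)
            self.sec = nn.Sequential(nn.Linear(d_in, width), nn.Sigmoid())
            self.head = nn.Linear(2 * width, 1)

        def forward(self, x):
            return self.head(torch.cat([torch.sin(self.osc(x)),
                                        self.sec(x)], dim=-1))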
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
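For intuition, the familiar two-layer instance of the mean-field view replaces a wide layer by an expectation over a measure $\mu$ on weights; the display below is the standard shallow case, not the paper's deep construction:

    $$ f(x) \;=\; \int \sigma(w^{\top} x)\, \mathrm{d}\mu(w) \;\approx\; \frac{1}{n} \sum_{i=1}^{n} \sigma(w_i^{\top} x), $$

so that training evolves the measure $\mu$ rather than any individual weight.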
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.