Exact imposition of boundary conditions with distance functions in
physics-informed deep neural networks
- URL: http://arxiv.org/abs/2104.08426v2
- Date: Sun, 7 Nov 2021 07:02:47 GMT
- Title: Exact imposition of boundary conditions with distance functions in
physics-informed deep neural networks
- Authors: N. Sukumar, Ankit Srivastava
- Abstract summary: We introduce geometry-aware trial functions in artificial neural networks to improve the training in deep learning for partial differential equations.
To exactly impose homogeneous Dirichlet boundary conditions, the trial function is taken as $\phi$ multiplied by the PINN approximation.
We present numerical solutions for linear and nonlinear boundary-value problems over domains with affine and curved boundaries.
- Score: 0.5804039129951741
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a new approach based on distance fields to
exactly impose boundary conditions in physics-informed deep neural networks.
The challenges in satisfying Dirichlet boundary conditions in meshfree and
particle methods are well-known. This issue is also pertinent in the
development of physics-informed neural networks (PINN) for the solution of
partial differential equations. We introduce geometry-aware trial functions in
artificial neural networks to improve the training in deep learning for partial
differential equations. To this end, we use concepts from constructive solid
geometry (R-functions) and generalized barycentric coordinates (mean value
potential fields) to construct $\phi$, an approximate distance function to the
boundary of a domain. To exactly impose homogeneous Dirichlet boundary
conditions, the trial function is taken as $\phi$ multiplied by the PINN
approximation, and its generalization via transfinite interpolation is used to
a priori satisfy inhomogeneous Dirichlet (essential), Neumann (natural), and
Robin boundary conditions on complex geometries. In doing so, we eliminate
modeling error associated with the satisfaction of boundary conditions in a
collocation method and ensure that kinematic admissibility is met pointwise in
a Ritz method. We present numerical solutions for linear and nonlinear
boundary-value problems over domains with affine and curved boundaries.
Benchmark problems in 1D for linear elasticity, advection-diffusion, and beam
bending; and in 2D for the Poisson equation, biharmonic equation, and the
nonlinear Eikonal equation are considered. The approach extends to higher
dimensions, and we showcase its use by solving a Poisson problem with
homogeneous Dirichlet boundary conditions over the 4D hypercube. This study
provides a pathway for meshfree analysis to be conducted on the exact geometry
without domain discretization.
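The construction in the abstract translates directly into code. The following is a minimal sketch, not the authors' implementation: it solves the 2D Poisson equation $-\Delta u = f$ with homogeneous Dirichlet conditions on the unit square, builds the approximate distance function $\phi$ by R-function conjunction of the four edge distances, and takes the trial function as $\phi$ times a small network. The network architecture, optimizer settings, and manufactured source term are illustrative assumptions.

```python
# Hedged sketch of exact homogeneous Dirichlet imposition in a PINN via an
# approximate distance function (ADF). Helper names (r_conjunction, phi, u)
# are illustrative, not taken from the paper's code.
import math
import torch

torch.manual_seed(0)

def r_conjunction(a, b):
    # R0-conjunction from R-function theory: zero iff a = 0 or b = 0
    # (for nonnegative arguments), positive where both are positive.
    return a + b - torch.sqrt(a**2 + b**2)

def phi(xy):
    # Approximate distance to the boundary of the unit square (0, 1)^2,
    # built from the distances to the four edges.
    x, y = xy[:, 0:1], xy[:, 1:2]
    return r_conjunction(r_conjunction(x, 1.0 - x), r_conjunction(y, 1.0 - y))

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def u(xy):
    # Trial function u = phi * N: vanishes on the boundary by construction,
    # so the usual boundary-loss term of a PINN is not needed.
    return phi(xy) * net(xy)

def pde_residual(xy):
    # Residual of -Laplacian(u) = f for the manufactured source
    # f = 2*pi^2*sin(pi x)*sin(pi y), whose exact solution is
    # sin(pi x)*sin(pi y) (a choice made only for this demo).
    xy = xy.requires_grad_(True)
    grad = torch.autograd.grad(u(xy).sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(grad[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grad[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    f = 2 * math.pi**2 * torch.sin(math.pi * xy[:, 0]) * torch.sin(math.pi * xy[:, 1])
    return -(u_xx + u_yy) - f

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    pts = torch.rand(256, 2)                # interior collocation points only
    loss = pde_residual(pts).pow(2).mean()  # PDE residual is the whole loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Quick check against the exact solution of the manufactured problem.
with torch.no_grad():
    test = torch.rand(1000, 2)
    exact = torch.sin(math.pi * test[:, 0:1]) * torch.sin(math.pi * test[:, 1:2])
    print("max abs error:", (u(test) - exact).abs().max().item())
```

Because $\phi$ vanishes on the boundary by construction, kinematic admissibility holds pointwise and the collocation loss consists of the PDE residual alone, which is the elimination of boundary modeling error described above. Inhomogeneous Dirichlet data $g$ can be imposed analogously with the trial function $g + \phi N$; the paper's transfinite-interpolation generalization extends this to Neumann and Robin conditions.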
Related papers
- Learning the boundary-to-domain mapping using Lifting Product Fourier Neural Operators for partial differential equations [5.5927988408828755]
We present a novel FNO-based architecture, named Lifting Product FNO (or LP-FNO), which can map arbitrary boundary functions to a solution in the entire domain.
We demonstrate the efficacy and resolution independence of the proposed LP-FNO for the 2D Poisson equation.
arXiv Detail & Related papers (2024-06-24T15:45:37Z)
- A Hybrid Kernel-Free Boundary Integral Method with Operator Learning for Solving Parametric Partial Differential Equations In Complex Domains [0.0]
The Kernel-Free Boundary Integral (KFBI) method presents an iterative solution to boundary integral equations arising from elliptic partial differential equations (PDEs).
We propose a hybrid KFBI method, integrating the foundational principles of the KFBI method with the capabilities of deep learning.
arXiv Detail & Related papers (2024-04-23T17:25:35Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- BINN: A deep learning approach for computational mechanics problems based on boundary integral equations [4.397337158619076]
We propose boundary-integral type neural networks (BINN) for boundary value problems in computational mechanics.
The boundary integral equations are employed to transfer all the unknowns to the boundary, where they are approximated using neural networks and solved through a training process.
arXiv Detail & Related papers (2023-01-11T14:10:23Z)
- Neural PDE Solvers for Irregular Domains [25.673617202478606]
We present a framework to neurally solve partial differential equations over domains with irregularly shaped geometric boundaries.
Our network takes in the shape of the domain as an input and is able to generalize to novel (unseen) irregular domains.
arXiv Detail & Related papers (2022-11-07T00:00:30Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Deep neural network approximation for high-dimensional elliptic PDEs with boundary conditions [6.079011829257036]
Deep neural networks are capable of approximating solutions to a class of parabolic partial differential equations without incurring the curse of dimension.
The present paper considers an important such model problem, namely the Poisson equation on a domain $D \subset \mathbb{R}^d$ subject to Dirichlet boundary conditions.
It is shown that deep neural networks are capable of representing solutions of that problem without incurring the curse of dimension.
arXiv Detail & Related papers (2020-07-10T13:40:27Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suited to neural networks.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to state-of-the-art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)