Non-overlapping, Schwarz-type Domain Decomposition Method for Physics and Equality Constrained Artificial Neural Networks
- URL: http://arxiv.org/abs/2409.13644v1
- Date: Fri, 20 Sep 2024 16:48:55 GMT
- Title: Non-overlapping, Schwarz-type Domain Decomposition Method for Physics and Equality Constrained Artificial Neural Networks
- Authors: Qifeng Hu, Shamsulhaq Basir, Inanc Senocak
- Abstract summary: We introduce a non-overlapping, Schwarz-type domain decomposition method employing a generalized interface condition.
Our method utilizes physics and equality constrained artificial neural networks (PECANN) in each subdomain.
We demonstrate the generalization ability and robust parallel performance of our method across a range of forward and inverse problems.
- Score: 0.24578723416255746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a non-overlapping, Schwarz-type domain decomposition method employing a generalized interface condition, tailored for physics-informed machine learning of partial differential equations (PDEs) in both forward and inverse scenarios. Our method utilizes physics and equality constrained artificial neural networks (PECANN) in each subdomain. Diverging from the original PECANN method, which uses initial and boundary conditions to constrain the PDEs alone, our method jointly employs both the boundary conditions and PDEs to constrain a specially formulated generalized interface loss function for each subdomain. This modification enhances the learning of subdomain-specific interface parameters, while delaying information exchange between neighboring subdomains, and thereby significantly reduces communication overhead. By utilizing an augmented Lagrangian method with a conditionally adaptive update strategy, the constrained optimization problem in each subdomain is transformed into a dual unconstrained problem. This approach enables neural network training without the need for ad-hoc tuning of model parameters. We demonstrate the generalization ability and robust parallel performance of our method across a range of forward and inverse problems, with solid parallel scaling performance up to 32 processes using the Message Passing Interface model. A key strength of our approach is its capability to solve both Laplace's and Helmholtz equations with multi-scale solutions within a unified framework, highlighting its broad applicability and efficiency.
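The abstract's central mechanism, transforming a constrained optimization problem into an unconstrained dual problem via an augmented Lagrangian with a conditionally adaptive penalty, can be illustrated on a scalar toy problem. The sketch below is an assumption-laden illustration, not the PECANN formulation: the specific penalty rule (grow only when the violation stalls), step sizes, and tolerances are all hypothetical choices.

```python
def augmented_lagrangian_solve(objective_grad, constraint, constraint_grad,
                               x0, mu=1.0, lam=0.0, outer_iters=20,
                               inner_iters=200, lr=0.01, gamma=2.0, eta=0.25):
    """Toy augmented Lagrangian method: minimize f(x) s.t. c(x) = 0.

    The penalty mu grows only when the constraint violation fails to
    shrink sufficiently -- a conditional update loosely in the spirit of
    the adaptive strategy described in the abstract; the exact PECANN
    rule differs.
    """
    x = float(x0)
    prev_viol = abs(constraint(x))
    for _ in range(outer_iters):
        # Inner loop: gradient descent on the unconstrained dual problem
        # L(x) = f(x) + lam * c(x) + (mu / 2) * c(x)^2
        for _ in range(inner_iters):
            g = objective_grad(x) + (lam + mu * constraint(x)) * constraint_grad(x)
            x -= lr * g
        viol = abs(constraint(x))
        lam += mu * constraint(x)       # first-order multiplier update
        if viol > eta * prev_viol:      # violation not shrinking fast enough
            mu *= gamma                 # tighten the penalty
        prev_viol = viol
    return x, lam

# Example: minimize (x - 2)^2 subject to x - 1 = 0; the solution is x = 1,
# with Lagrange multiplier 2.
x_star, lam_star = augmented_lagrangian_solve(
    objective_grad=lambda x: 2.0 * (x - 2.0),
    constraint=lambda x: x - 1.0,
    constraint_grad=lambda x: 1.0,
    x0=0.0,
)
print(x_star, lam_star)  # converges near x = 1, lam = 2
```

In a PECANN-style solver the scalar `x` would be replaced by network parameters, the objective by the interface loss, and the constraints by PDE and boundary residuals, but the outer multiplier/penalty logic follows the same pattern.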
Related papers
- WANCO: Weak Adversarial Networks for Constrained Optimization problems [5.257895611010853]
We first transform constrained optimization problems into minimax problems using the augmented Lagrangian method.
We then use two (or several) deep neural networks to represent the primal and dual variables respectively.
The parameters in the neural networks are then trained by an adversarial process.
arXiv Detail & Related papers (2024-07-04T05:37:48Z) - A Generalized Schwarz-type Non-overlapping Domain Decomposition Method
using Physics-constrained Neural Networks [0.9137554315375919]
We present a meshless Schwarz-type non-overlapping domain decomposition based on artificial neural networks.
Our method is applicable to both the Laplace and Helmholtz equations.
arXiv Detail & Related papers (2023-07-23T21:18:04Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - An adaptive augmented Lagrangian method for training physics and
equality constrained artificial neural networks [0.9137554315375919]
We apply our PECANN framework to solve forward and inverse problems that have an expanded and diverse set of constraints.
We show that ALM with its conventional formulation to update its penalty parameter and Lagrange multiplier stalls for such challenging problems.
We propose an adaptive ALM in which each constraint is assigned a unique penalty parameter that evolves adaptively according to a rule inspired by the adaptive subgradient method.
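The per-constraint adaptivity described above can be sketched with an Adagrad-style accumulator: each constraint violation feeds a running sum of squares, which scales that constraint's multiplier step. This is a hypothetical illustration of the general idea, the actual update rule in the cited paper differs in its details.

```python
import numpy as np

def adaptive_multiplier_step(lams, vs, cons, alpha=1e-2, eps=1e-8):
    """One dual-ascent step with a per-constraint adaptive step size.

    Hypothetical sketch: each constraint violation c_i feeds an
    Adagrad-style accumulator v_i, and its multiplier lam_i takes a step
    scaled by alpha / (sqrt(v_i) + eps). Not the paper's exact rule.
    """
    cons = np.asarray(cons, dtype=float)
    vs = vs + cons ** 2                              # accumulate squared violations
    lams = lams + alpha / (np.sqrt(vs) + eps) * cons  # per-constraint scaled step
    return lams, vs

lams = np.zeros(3)
vs = np.zeros(3)
# Two steps with violations of very different magnitudes per constraint.
lams, vs = adaptive_multiplier_step(lams, vs, [1.0, 0.1, 0.0])
lams, vs = adaptive_multiplier_step(lams, vs, [1.0, 0.1, 0.0])
```

Note that constraints with persistently large and persistently small violations end up with comparable effective multiplier steps: the Adagrad-style normalization makes the step nearly scale-invariant, which is why assigning each constraint its own accumulator helps when constraints have very different magnitudes.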
arXiv Detail & Related papers (2023-06-08T03:16:21Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We develop an ODE based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Physics and Equality Constrained Artificial Neural Networks: Application
to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDE)
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
arXiv Detail & Related papers (2021-09-30T05:55:35Z) - Train Once and Use Forever: Solving Boundary Value Problems in Unseen
Domains with Pre-trained Deep Learning Models [0.20999222360659606]
This paper introduces a transferable framework for solving boundary value problems (BVPs) via deep neural networks.
First, we introduce the genomic flow network (GFNet), a neural network that can infer the solution of a BVP across arbitrary boundary conditions.
Then, we propose the mosaic flow (MF) predictor, a novel iterative algorithm that assembles or stitches the GFNet's inferences.
arXiv Detail & Related papers (2021-04-22T05:20:27Z) - Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned
Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z) - GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal
Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.