A hybrid neural-network and finite-difference method for solving Poisson
equation with jump discontinuities on interfaces
- URL: http://arxiv.org/abs/2210.05523v1
- Date: Tue, 11 Oct 2022 15:15:09 GMT
- Title: A hybrid neural-network and finite-difference method for solving Poisson
equation with jump discontinuities on interfaces
- Authors: Wei-Fan Hu and Te-Sheng Lin and Yu-Hau Tseng and Ming-Chih Lai
- Abstract summary: A new hybrid neural-network and finite-difference method is developed for solving the Poisson equation in a regular domain with jump discontinuities on an embedded irregular interface.
The two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, a new hybrid neural-network and finite-difference method is
developed for solving the Poisson equation in a regular domain with jump
discontinuities on an embedded irregular interface. Since the solution has low
regularity across the interface, applying a finite-difference discretization to
this problem requires an additional treatment accounting for the jump
discontinuities at grid points near the interface. Here, we
aim to alleviate such extra effort and ease the implementation. The key idea is
to decompose the solution into two parts: singular (non-smooth) and regular
(smooth) parts. The neural network learning machinery incorporating given jump
conditions finds the singular solution, while the standard finite difference
method is used to obtain the regular solution with associated boundary
conditions. Regardless of the interface geometry, these two tasks only require
a supervised learning task of function approximation and a fast direct solver
of the Poisson equation, making the hybrid method easy to implement and
efficient. The two- and three-dimensional numerical results show that the
present hybrid method preserves second-order accuracy for the solution and its
derivatives, and it is comparable with the traditional immersed interface
method in the literature.
Related papers
- Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, on constrained logistic regression with data from LIBSVM, and on a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Dirichlet-Neumann learning algorithm for solving elliptic interface problems [7.935690079593201]
A Dirichlet-Neumann learning algorithm is proposed in this work to solve benchmark elliptic interface problems with high-contrast coefficients as well as irregular interfaces.
We carry out a rigorous error analysis to evaluate the discrepancy caused by the boundary penalty treatment for each subproblem.
The effectiveness and robustness of our proposed methods are demonstrated experimentally through a series of elliptic interface problems.
arXiv Detail & Related papers (2023-01-18T08:10:49Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and involve a large number of data points.
Our proposed method outperforms an existing method that achieved the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- $r-$Adaptive Deep Learning Method for Solving Partial Differential Equations [0.685316573653194]
We introduce an $r-$adaptive algorithm to solve Partial Differential Equations using a Deep Neural Network.
The proposed method is restricted to tensor-product meshes and optimizes the boundary node locations in one dimension, from which two- or three-dimensional meshes are built.
arXiv Detail & Related papers (2022-10-19T21:38:46Z)
- A cusp-capturing PINN for elliptic interface problems [0.0]
We introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties.
The proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains.
We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model.
arXiv Detail & Related papers (2022-10-16T03:05:18Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems [0.0]
A Discontinuity Capturing Shallow Neural Network (DCSNN) is developed for approximating $d$-dimensional piecewise continuous functions and for solving elliptic interface problems.
The DCSNN model is comparably efficient because only a moderate number of parameters needs to be trained.
arXiv Detail & Related papers (2021-06-10T08:40:30Z)
- Least-Squares ReLU Neural Network (LSNN) Method For Linear Advection-Reaction Equation [3.6525914200522656]
This paper studies the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with a discontinuous solution.
The method is capable of approximating the discontinuous interface of the underlying problem automatically through the free hyper-planes of the ReLU neural network.
arXiv Detail & Related papers (2021-05-25T03:13:15Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.