A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems
- URL: http://arxiv.org/abs/2106.05587v1
- Date: Thu, 10 Jun 2021 08:40:30 GMT
- Title: A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems
- Authors: Wei-Fan Hu and Te-Sheng Lin and Ming-Chih Lai
- Abstract summary: A Discontinuity Capturing Shallow Neural Network (DCSNN) is developed for approximating $d$-dimensional piecewise continuous functions and for solving elliptic interface problems.
The DCSNN model is comparatively efficient because only a moderate number of parameters needs to be trained.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a new Discontinuity Capturing Shallow Neural Network (DCSNN)
for approximating $d$-dimensional piecewise continuous functions and for
solving elliptic interface problems is developed. There are three novel
features in the present network: (i) jump discontinuities are captured
sharply, (ii) it is completely shallow, consisting of only one hidden layer,
and (iii) it is completely mesh-free for solving partial differential equations
(PDEs). We first continuously extend the $d$-dimensional piecewise continuous
function into $(d+1)$-dimensional space by augmenting one coordinate variable
that labels the pieces of the discontinuous function, and then construct a
shallow neural network to express this augmented function. Since only one
hidden layer is employed, the number of training parameters (weights and
biases) scales linearly with both the dimension and the number of neurons used
in the hidden layer. For solving elliptic interface equations, the network is
trained by minimizing a mean squared error loss that consists of the residual
of the governing equation, the boundary condition, and the interface jump
conditions. We perform a series of numerical tests to assess the accuracy and
efficiency of the present network. Our DCSNN model is comparatively efficient
because only a moderate number of parameters needs to be trained (a few hundred
parameters throughout all numerical examples here), and it attains better
accuracy (with fewer parameters) than methods in the literature based on
piecewise deep neural networks. We also compare with results obtained by the
traditional grid-based immersed interface method (IIM), which is designed
specifically for elliptic interface problems; again, the present results are
more accurate than those of the IIM. We conclude by solving a six-dimensional
problem to demonstrate the capability of the present network for
high-dimensional applications.
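To make the augmented-variable idea concrete, here is a minimal sketch (our illustration, not the authors' released code; the network width, the sigmoid activation, and the ±1 piece labels are assumptions):

```python
import torch

# Shallow network on the augmented input (x, z): the extra coordinate z labels
# the piece of the discontinuous function (assumed here: z = +1 on one side of
# the interface, z = -1 on the other).
class DCSNN(torch.nn.Module):
    def __init__(self, d, width=50):
        super().__init__()
        self.hidden = torch.nn.Linear(d + 1, width)  # single hidden layer
        self.output = torch.nn.Linear(width, 1)
        # Parameter count: (d + 3) * width + 1, i.e., linear in d and width.

    def forward(self, x, z):
        h = torch.sigmoid(self.hidden(torch.cat([x, z], dim=1)))
        return self.output(h)

# Evaluating the same network with z = +1 or z = -1 recovers the two smooth
# pieces, so the jump across the interface is represented sharply.
net = DCSNN(d=2)
x = torch.rand(8, 2)
u_plus = net(x, torch.ones(8, 1))    # branch on one side of the interface
u_minus = net(x, -torch.ones(8, 1))  # branch on the other side
```

Training for an interface problem would then minimize a mean squared loss summing the PDE residual at interior points, the boundary-condition mismatch at boundary points, and the jump-condition mismatch at interface points, with all derivatives obtained by automatic differentiation.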
Related papers
- A Nonoverlapping Domain Decomposition Method for Extreme Learning Machines: Elliptic Problems [0.0]
Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) with a single-hidden-layer feed-forward neural network.
In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs, but is also suitable for parallel computation.
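For readers unfamiliar with ELM, here is a bare-bones NumPy sketch of the core idea (our illustration with toy data; the DDM coupling itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, width=100):
    # Hidden-layer weights and biases are drawn at random and stay frozen;
    # ELM training reduces to a linear least-squares solve for the output layer.
    W = rng.normal(size=(X.shape[1], width))
    b = rng.normal(size=width)
    H = np.tanh(X @ W + b)                        # hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a 1D function from samples.
X = np.linspace(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
W, b, beta = elm_fit(X, y)
```

For a PDE, the least-squares system would instead collocate the differential operator applied to the hidden features; a nonoverlapping DDM assembles and solves such systems subdomain-by-subdomain in parallel, coupled through interface conditions.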
arXiv Detail & Related papers (2024-06-22T23:25:54Z)
- Adaptive Multilevel Neural Networks for Parametric PDEs with Error Estimation [0.0]
A neural network architecture is presented to solve high-dimensional parameter-dependent partial differential equations (pPDEs).
It is constructed to map parameters of the model data to corresponding finite element solutions.
It outputs a coarse grid solution and a series of corrections, as produced in an adaptive finite element method (AFEM).
arXiv Detail & Related papers (2024-03-19T11:34:40Z)
- The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks [53.95175206863992]
We study the type of solutions to which gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss.
We prove that although shallow ReLU networks are universal approximators, stable shallow networks are not.
arXiv Detail & Related papers (2023-06-30T09:17:39Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- A cusp-capturing PINN for elliptic interface problems [0.0]
We introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties.
The proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains.
We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model.
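As a rough illustration of the feature-input idea (our sketch; the example interface, the exact form of the cusp-enforced feature, and the network size are assumptions):

```python
import torch

def phi(x):
    # Example level-set function: signed distance to the unit circle, so the
    # interface is the zero level set {phi = 0}.  Purely illustrative.
    return x.norm(dim=1, keepdim=True) - 1.0

class CuspNet(torch.nn.Module):
    # The network sees (x, |phi(x)|); the kink of |phi| across the interface
    # lets the output form a cusp there while staying smooth elsewhere.
    def __init__(self, d, width=50):
        super().__init__()
        self.hidden = torch.nn.Linear(d + 1, width)
        self.output = torch.nn.Linear(width, 1)

    def forward(self, x):
        feat = torch.cat([x, torch.abs(phi(x))], dim=1)
        return self.output(torch.tanh(self.hidden(feat)))

net = CuspNet(d=2)
u = net(torch.rand(4, 2))  # evaluation needs no mesh, only sample points
```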
arXiv Detail & Related papers (2022-10-16T03:05:18Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
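A bare-bones sketch of the POD step being described (snapshot data, shapes, and mode count are placeholders of ours):

```python
import numpy as np

# Columns are solution snapshots of the PDE at different parameters (fake data
# here).  The SVD yields an orthonormal reduced-order basis; the neural
# networks are regressed onto the leading modes, and a branch network maps PDE
# parameters to coefficients in this basis.
snapshots = np.random.rand(4096, 50)   # 50 snapshots of a 4096-dof field
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :10]                      # leading 10 POD modes
coeffs = basis.T @ snapshots           # reduced coordinates per snapshot
reconstruction = basis @ coeffs        # rank-10 approximation of the data
```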
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- A Shallow Ritz Method for elliptic problems with Singular Sources [0.1574941677049471]
We develop a shallow Ritz-type neural network for solving elliptic problems with delta function singular sources on an interface.
We include the level set function of the interface as a feature input and find that it significantly improves the training efficiency and accuracy.
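Schematically, a Ritz-type loss for such a problem might look like the following (a sketch on our part; sign conventions, sampling measures, and the interface term's weighting are assumptions):

```python
import torch

def ritz_loss(net, x_dom, f_dom, x_ifc, q_ifc):
    # Monte Carlo estimate of the energy functional
    #   E[u] = \int 0.5*|grad u|^2 - f*u dx - \int_Gamma q*u ds,
    # whose minimizer solves the elliptic problem with a delta source of
    # strength q on the interface (weak form; domain volume and interface
    # length factors omitted for brevity).
    x_dom.requires_grad_(True)
    u = net(x_dom)
    grad_u = torch.autograd.grad(u.sum(), x_dom, create_graph=True)[0]
    volume = (0.5 * (grad_u ** 2).sum(dim=1) - f_dom * u.squeeze(1)).mean()
    surface = (q_ifc * net(x_ifc).squeeze(1)).mean()
    return volume - surface
```

The level-set feature input mentioned in the summary would enter exactly as in the cusp-capturing sketch above, concatenated to the spatial coordinates.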
arXiv Detail & Related papers (2021-07-26T08:07:19Z)
- Least-Squares ReLU Neural Network (LSNN) Method For Linear Advection-Reaction Equation [3.6525914200522656]
This paper studies a least-squares ReLU neural network method for solving the linear advection-reaction problem with a discontinuous solution.
The method is capable of approximating the discontinuous interface of the underlying problem automatically through the free hyper-planes of the ReLU neural network.
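A minimal sketch of the least-squares residual loss for an advection-reaction equation (the velocity field beta, reaction coefficient c, and network size are illustrative assumptions):

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64),
    torch.nn.ReLU(),          # ReLU breaklines act as free hyperplanes that
    torch.nn.Linear(64, 1),   # can align with a discontinuous interface
)

def lsnn_loss(x, beta, c, f):
    # Least-squares residual of  beta . grad(u) + c*u = f  at collocation points.
    x.requires_grad_(True)
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = (grad_u * beta).sum(dim=1) + c * u.squeeze(1) - f
    return (residual ** 2).mean()
```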
arXiv Detail & Related papers (2021-05-25T03:13:15Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)