A Shallow Ritz Method for Elliptic Problems with Singular Sources
- URL: http://arxiv.org/abs/2107.12013v1
- Date: Mon, 26 Jul 2021 08:07:19 GMT
- Title: A Shallow Ritz Method for Elliptic Problems with Singular Sources
- Authors: Ming-Chih Lai, Che-Chia Chang, Wei-Syuan Lin, Wei-Fan Hu, Te-Sheng Lin
- Abstract summary: We develop a shallow Ritz-type neural network for solving elliptic problems with delta function singular sources on an interface.
We include the level set function of the interface as a feature input and find that it significantly improves the training efficiency and accuracy.
- Score: 0.1574941677049471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a shallow Ritz-type neural network for solving elliptic
problems with delta function singular sources on an interface is developed.
There are three novel features in the present work: (i) the delta
function singularity is naturally removed, (ii) the level set function is
introduced as a feature input, and (iii) the network is completely shallow,
consisting of only one hidden layer. We first introduce the energy functional of the problem
and then transform the contribution of singular sources to a regular surface
integral along the interface. In this way, the delta function singularity can
be naturally removed without introducing the discrete delta functions
commonly used in traditional regularization methods such as the well-known
immersed boundary method. The original problem is then reformulated as a
minimization problem. We propose a shallow Ritz-type neural network with one
hidden layer to approximate the global minimizer of the energy functional. As a
result, the network is trained by minimizing the loss function that is a
discrete version of the energy. In addition, we include the level set function
of the interface as a feature input and find that it significantly improves the
training efficiency and accuracy. We perform a series of numerical tests to
demonstrate the accuracy of the present network as well as its capability for
problems in irregular domains and in higher dimensions.
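To make the recipe concrete, here is a minimal PyTorch sketch of the setup described above: one hidden layer, with the level set value appended to the coordinates as a feature, trained on a Monte Carlo discretization of the energy. The Poisson-type energy with interface source strength q, the circular interface, and all sizes are our illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class ShallowRitzNet(nn.Module):
    """One-hidden-layer network taking (x, phi(x)) as input; width and
    activation are our assumptions."""
    def __init__(self, dim=2, width=50):
        super().__init__()
        self.hidden = nn.Linear(dim + 1, width)  # +1 for the level set feature
        self.out = nn.Linear(width, 1)

    def forward(self, x, phi):
        return self.out(torch.sigmoid(self.hidden(torch.cat([x, phi], dim=-1))))

def level_set(x):
    # Hypothetical interface: the unit circle, phi(x) = |x| - 1.
    return x.norm(dim=-1, keepdim=True) - 1.0

def energy_loss(net, x_vol, x_srf, f_vol, q_srf, vol, area):
    """Monte Carlo discretization of an energy of the assumed form
    E(u) = int_Omega (|grad u|^2 / 2 - f u) dx - int_Gamma q u dS,
    so the singular source enters only through a regular surface integral."""
    x_vol = x_vol.clone().requires_grad_(True)
    u = net(x_vol, level_set(x_vol))
    grad_u = torch.autograd.grad(u.sum(), x_vol, create_graph=True)[0]
    bulk = (0.5 * (grad_u ** 2).sum(-1) - f_vol * u.squeeze(-1)).mean() * vol
    u_srf = net(x_srf, torch.zeros(x_srf.shape[0], 1))  # phi = 0 on the interface
    surface = (q_srf * u_srf.squeeze(-1)).mean() * area
    return bulk - surface
```

Training would then minimize energy_loss over sampled volume and interface points; boundary conditions would need an additional penalty term, omitted here for brevity.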
Related papers
- Your Network May Need to Be Rewritten: Network Adversarial Based on High-Dimensional Function Graph Decomposition [0.994853090657971]
We propose a network adversarial method to address the aforementioned challenges.
This is the first method to use different activation functions in a network.
We have achieved a substantial improvement over standard activation functions regarding both training efficiency and predictive accuracy.
arXiv Detail & Related papers (2024-05-04T11:22:30Z) - On the Identification and Optimization of Nonsmooth Superposition
Operators in Semilinear Elliptic PDEs [3.045851438458641]
We study an infinite-dimensional optimization problem that aims to identify the Nemytskii operator in the nonlinear part of a prototypical semilinear elliptic partial differential equation (PDE).
In contrast to previous works, we consider this identification problem in a low-regularity regime in which the function inducing the Nemytskii operator is a priori only known to be an element of $H^1_{\mathrm{loc}}(\mathbb{R})$.
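For orientation, a prototypical problem of the kind studied there (our illustrative formulation, not necessarily the paper's exact setting) is to recover the nonlinearity $f$ in

```latex
\begin{aligned}
-\Delta u + f(u) &= g && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega,
\end{aligned}
```

from observations of $u$, where $f$, and hence the Nemytskii (superposition) operator $u \mapsto f(u)$, is a priori only known to lie in $H^1_{\mathrm{loc}}(\mathbb{R})$.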
arXiv Detail & Related papers (2023-06-08T13:33:20Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
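As standard background (not specific to this paper): the LLA linearizes the network around the MAP weights $\theta_\ast$ and equips the weights with a Gaussian,

```latex
f_{\mathrm{lin}}(x;\theta) = f(x;\theta_\ast) + J_{\theta_\ast}(x)^{\top}(\theta - \theta_\ast),
\qquad
\theta \sim \mathcal{N}\!\big(\theta_\ast,\ \big(\nabla_\theta^2 \mathcal{L}(\theta_\ast)\big)^{-1}\big),
```

where $\mathcal{L}$ is the training loss (negative log-posterior); the resulting predictive distribution is Gaussian in closed form, which is what makes it a convenient surrogate for Bayesian optimization.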
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - A cusp-capturing PINN for elliptic interface problems [0.0]
We introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties.
The proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains.
We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model.
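A minimal sketch of the feature idea as we read it (the particular choice of feeding |phi(x)|, and all sizes, are our assumptions): because |phi| has a kink exactly on the interface phi = 0, a network that is smooth in its inputs can still represent solutions whose derivatives jump there.

```python
import torch
import torch.nn as nn

class CuspFeaturePINN(nn.Module):
    """PINN whose extra input |phi(x)| is non-smooth exactly on the
    interface phi = 0, so the learned solution can carry a derivative
    jump there (illustrative sketch, not the authors' code)."""
    def __init__(self, dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1))

    def forward(self, x, phi):
        # The absolute value creates the cusp feature on phi = 0.
        return self.net(torch.cat([x, phi.abs()], dim=-1))
```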
arXiv Detail & Related papers (2022-10-16T03:05:18Z) - NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalizes to the space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z) - Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
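To make "backprop-optimized neural function approximators in the computation graph" concrete, here is one generic message-passing round in PyTorch (a textbook sketch under our own choices of MLPs and sum aggregation, not the paper's architecture):

```python
import torch
import torch.nn as nn

class MPStep(nn.Module):
    """One message-passing round: an MLP computes a message per edge,
    messages are summed at the receiving node, and a second MLP updates
    the node state (generic sketch)."""
    def __init__(self, d=16):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.upd = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, h, edges):  # h: (N, d) node states; edges: (E, 2) src->dst
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))   # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)     # sum at receivers
        return self.upd(torch.cat([h, agg], dim=-1))        # node update
```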
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - A Discontinuity Capturing Shallow Neural Network for Elliptic Interface
Problems [0.0]
A Discontinuity Capturing Shallow Neural Network (DCSNN) is developed for approximating $d$-dimensional piecewise continuous functions and for solving elliptic interface problems.
The DCSNN model is comparatively efficient, since only a moderate number of parameters need to be trained.
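Structurally this parallels the shallow Ritz sketch above; the difference, on our reading, is that the extra feature is a discrete side label rather than the level set value itself (the sign feature and the toy interface below are our assumptions):

```python
import torch
import torch.nn as nn

# One-hidden-layer net on the augmented input (x, z), z = sign(phi(x)):
# smooth in (x, z), yet piecewise continuous as a function of x alone.
net = nn.Sequential(nn.Linear(3, 30), nn.Sigmoid(), nn.Linear(30, 1))

x = torch.rand(128, 2) * 2 - 1                       # points in [-1, 1]^2
z = torch.sign(x.norm(dim=-1, keepdim=True) - 0.5)   # hypothetical circle interface
u = net(torch.cat([x, z], dim=-1))                   # (128, 1) piecewise values
```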
arXiv Detail & Related papers (2021-06-10T08:40:30Z) - Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
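The core of the construction is easy to sketch: transform to Fourier space, apply a learned complex multiplier to the lowest modes, and transform back. Below is a minimal 1-D spectral layer in PyTorch (channel counts and mode cutoff are illustrative, not the authors' configuration):

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """FFT -> learned per-mode complex weights on the lowest `modes`
    frequencies -> inverse FFT (the core Fourier-layer idea)."""
    def __init__(self, channels=32, modes=16):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, u):              # u: (batch, channels, n_grid)
        # Assumes n_grid // 2 + 1 >= self.modes.
        u_hat = torch.fft.rfft(u)      # (batch, channels, n_grid // 2 + 1)
        out_hat = torch.zeros_like(u_hat)
        out_hat[..., :self.modes] = torch.einsum(
            "bcm,com->bom", u_hat[..., :self.modes], self.weight)
        return torch.fft.irfft(out_hat, n=u.size(-1))  # back to physical space
```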
arXiv Detail & Related papers (2020-10-18T00:34:21Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)