A cusp-capturing PINN for elliptic interface problems
- URL: http://arxiv.org/abs/2210.08424v2
- Date: Sun, 16 Apr 2023 14:37:29 GMT
- Title: A cusp-capturing PINN for elliptic interface problems
- Authors: Yu-Hau Tseng, Te-Sheng Lin, Wei-Fan Hu, Ming-Chih Lai
- Abstract summary: We introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties.
The proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains.
We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a cusp-capturing physics-informed neural network
(PINN) to solve discontinuous-coefficient elliptic interface problems whose
solution is continuous but has discontinuous first derivatives on the
interface. To find such a solution using neural network representation, we
introduce a cusp-enforced level set function as an additional feature input to
the network to retain the inherent solution properties; that is, capturing the
solution cusps (where the derivatives are discontinuous) sharply. In addition,
the proposed neural network has the advantage of being mesh-free, so it can
easily handle problems in irregular domains. We train the network using the
physics-informed framework in which the loss function comprises the residual of
the differential equation together with certain interface and boundary
conditions. We conduct a series of numerical experiments to demonstrate the
effectiveness of the cusp-capturing technique and the accuracy of the present
network model. Numerical results show that even using a one-hidden-layer
(shallow) network with a moderate number of neurons and sufficient training
data points, the present network model can achieve prediction accuracy
comparable with traditional methods. Moreover, if the solution is discontinuous
across the interface, we can simply incorporate an additional supervised
learning task for solution-jump approximation into the present network without
much difficulty.
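The cusp-capturing idea described in the abstract can be illustrated with a minimal sketch (not the authors' code; the level set function, interface location, and network weights below are hypothetical): because the augmented input $|\phi(x)|$ has a kink on the interface $\phi = 0$, a smooth shallow network of $(x, |\phi(x)|)$ stays continuous across the interface while its first derivative can jump there, which is exactly the cusp behavior the solution requires.

```python
import numpy as np

# Minimal sketch of the cusp-capturing feature augmentation (hypothetical
# setup, not the paper's code): a one-hidden-layer tanh network whose inputs
# are augmented with |phi(x)|, where phi(x) = x - 0.5 places an interface at
# x = 0.5. The kink in |phi| lets the smooth network represent a solution
# cusp: continuous value, discontinuous first derivative on the interface.

rng = np.random.default_rng(0)

def phi(x):
    return x - 0.5  # level set function: interface where phi(x) == 0

def network(x, W1, b1, W2):
    """Shallow tanh network on the augmented feature (x, |phi(x)|)."""
    feats = np.stack([x, np.abs(phi(x))], axis=-1)  # shape (n, 2)
    return np.tanh(feats @ W1 + b1) @ W2            # shape (n,)

# Random (untrained) weights suffice to demonstrate the representational
# property; training would fit these to the PDE residual and conditions.
W1 = rng.normal(size=(2, 10))
b1 = rng.normal(size=10)
W2 = rng.normal(size=10)

h = 1e-6
u_mid   = network(np.array([0.5]),     W1, b1, W2)[0]
u_left  = network(np.array([0.5 - h]), W1, b1, W2)[0]
u_right = network(np.array([0.5 + h]), W1, b1, W2)[0]
du_left  = (u_mid - u_left)  / h   # one-sided derivative from the left
du_right = (u_right - u_mid) / h   # one-sided derivative from the right

print(abs(u_right - u_left))    # near zero: the output is continuous
print(abs(du_right - du_left))  # order one: the first derivative jumps
```

The same construction extends to higher dimensions by feeding $|\phi(x, y)|$ (or a cusp-enforced variant of the level set function) as one extra network input alongside the coordinates.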
Related papers
- GradINN: Gradient Informed Neural Network [2.287415292857564]
We propose a methodology inspired by Physics-Informed Neural Networks (PINNs).
GradINNs leverage prior beliefs about a system's gradient to constrain the predicted function's gradient across all input dimensions.
We demonstrate the advantages of GradINNs, particularly in low-data regimes, on diverse problems spanning non time-dependent systems.
arXiv Detail & Related papers (2024-09-03T14:03:29Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
arXiv Detail & Related papers (2023-02-09T21:56:59Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Critical Investigation of Failure Modes in Physics-informed Neural Networks [0.9137554315375919]
We show that a physics-informed neural network with a composite formulation produces a highly non-convex loss surface that is difficult to optimize.
We also assess the training of both approaches on two elliptic problems with increasingly complex target solutions.
arXiv Detail & Related papers (2022-06-20T18:43:35Z)
- Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems [0.0]
A Discontinuity Capturing Shallow Neural Network (DCSNN) is developed for approximating $d$-dimensional piecewise continuous functions and for solving elliptic interface problems.
The DCSNN model is comparatively efficient, since only a moderate number of parameters need to be trained.
arXiv Detail & Related papers (2021-06-10T08:40:30Z)
- Least-Squares ReLU Neural Network (LSNN) Method For Linear Advection-Reaction Equation [3.6525914200522656]
This paper studies least-squares ReLU neural network method for solving the linear advection-reaction problem with discontinuous solution.
The method is capable of approximating the discontinuous interface of the underlying problem automatically through the free hyper-planes of the ReLU neural network.
arXiv Detail & Related papers (2021-05-25T03:13:15Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.