Learning from Integral Losses in Physics Informed Neural Networks
- URL: http://arxiv.org/abs/2305.17387v2
- Date: Tue, 11 Jun 2024 17:22:28 GMT
- Title: Learning from Integral Losses in Physics Informed Neural Networks
- Authors: Ehsan Saleh, Saba Ghaffari, Timothy Bretl, Luke Olson, Matthew West
- Abstract summary: This work proposes a solution for the problem of training physics-informed networks under partial integro-differential equations.
We show that naively replacing these integrals with unbiased estimates leads to biased loss functions and solutions.
Our numerical results confirm this bias in practice and show that our proposed delayed target approach yields accurate solutions, comparable in quality to those obtained with large-sample integral estimates.
- Score: 7.4308941970763795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a solution for the problem of training physics-informed networks under partial integro-differential equations. These equations require an infinite or a large number of neural evaluations to construct a single residual for training. As a result, accurate evaluation may be impractical, and we show that naively replacing these integrals with unbiased estimates leads to biased loss functions and solutions. To overcome this bias, we investigate three types of potential solutions: deterministic sampling approaches, the double-sampling trick, and the delayed target method. We consider three classes of PDEs for benchmarking: one defining Poisson problems with singular charges and weak solutions of up to 10 dimensions, another involving weak solutions of electromagnetic fields and a Maxwell equation, and a third defining a Smoluchowski coagulation problem. Our numerical results confirm the existence of the aforementioned bias in practice and show that our proposed delayed target approach can produce accurate solutions, comparable in quality to those estimated with a large-sample-size integral. Our implementation is open-source and available at https://github.com/ehsansaleh/btspinn.
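The core failure mode, and the double-sampling fix, can be reproduced in a few lines. Below is a minimal numpy sketch (an illustration, not the paper's code): for a loss of the form (I - y)^2 where the integral I must be estimated by sampling, a single-sample estimate I_hat satisfies E[(I_hat - y)^2] = (I - y)^2 + Var(I_hat), so the naive loss is biased upward by the estimator's variance, while the product of two independent one-sample estimates, (I1 - y)(I2 - y), is unbiased for (I - y)^2. The toy integrand, target value y, and sample count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: I = E_{x~U(0,1)}[x^2] = 1/3, squared residual against a fixed y.
f = lambda x: x ** 2
true_I, y = 1.0 / 3.0, 0.5

n = 200_000
I1 = f(rng.random(n))                        # one-sample estimates of I
I2 = f(rng.random(n))                        # independent second estimates

true_loss = (true_I - y) ** 2                # target value: 1/36 ~ 0.0278
naive = np.mean((I1 - y) ** 2)               # biased upward by Var(I1) ~ 0.089
double = np.mean((I1 - y) * (I2 - y))        # double-sampling trick: unbiased

print(f"true={true_loss:.4f}  naive={naive:.4f}  double-sampled={double:.4f}")
```

The delayed target method avoids needing two independent estimates per residual by routing the sampled integral through a slowly updated, gradient-detached copy of the network, in the spirit of target networks in deep reinforcement learning. Below is a hedged PyTorch sketch for a toy integral equation u(x) = g(x) + 0.5 * \int_0^1 u(y) dy; the architecture, EMA rate, and toy equation are illustrative assumptions, not the authors' configuration (see the linked repository for that). Because the target is detached, the residual's gradient is linear in the noisy estimate, so the single-sample noise no longer biases the update direction.

```python
import copy
import torch

# Toy integral equation: u(x) = g(x) + 0.5 * \int_0^1 u(y) dy, with g(x) = x.
g = lambda x: x

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
target = copy.deepcopy(net)                # delayed copy; never sees gradients
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
tau = 0.01                                 # EMA rate for the delayed copy

for step in range(2000):
    x = torch.rand(128, 1)                 # collocation points
    y_s = torch.rand(128, 1)               # ONE integration sample per point
    integral_est = target(y_s)             # 1-sample estimate through the
                                           # detached target network
    loss = ((net(x) - (g(x) + 0.5 * integral_est)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                  # Polyak/EMA update of the target
        for pt, p in zip(target.parameters(), net.parameters()):
            pt.mul_(1 - tau).add_(tau * p)
```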
Related papers
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
arXiv Detail & Related papers (2024-05-31T14:24:39Z) - Approximation Theory, Computing, and Deep Learning on the Wasserstein Space [0.5735035463793009]
We study the challenge of approximating functions in infinite-dimensional spaces from finite samples.
Our particular focus centers on the Wasserstein distance function, which serves as a relevant example.
In our numerical implementation, we harness appropriately designed neural networks to serve as basis functions.
arXiv Detail & Related papers (2023-10-30T13:59:47Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Matching, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Good Lattice Training: Physics-Informed Neural Networks Accelerated by Number Theory [7.462336024223669]
We propose a new technique called good lattice training (GLT) for PINNs.
GLT offers a set of collocation points that are effective even with a small number of points and for multi-dimensional spaces.
Our experiments demonstrate that GLT requires 2--20 times fewer collocation points than uniformly random sampling or Latin hypercube sampling.
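For context, one standard number-theoretic construction of such point sets is the rank-1 lattice, x_i = frac(i * z / n) for a generating vector z. The sketch below uses the classic 2D Fibonacci choice n = 55, z = (1, 34) purely as an illustration; it is not necessarily the generating vector selected in the GLT paper.

```python
import numpy as np

def rank1_lattice(n: int, z) -> np.ndarray:
    """Return n rank-1 lattice points in [0, 1)^d: x_i = frac(i * z / n)."""
    z = np.asarray(z, dtype=np.float64)
    i = np.arange(n)[:, None]          # shape (n, 1), broadcasts against z
    return (i * z / n) % 1.0

# Fibonacci lattice in 2D: n = 55 collocation points, z = (1, 34).
pts = rank1_lattice(55, [1, 34])
print(pts.shape)  # (55, 2)
```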
arXiv Detail & Related papers (2023-07-26T00:01:21Z) - Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the ADE solution make Latin hypercube sampling of the equation's residual (common in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the solver space can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z) - Computational characteristics of feedforward neural networks for solving a stiff differential equation [0.0]
We study the solution of a simple but fundamental stiff ordinary differential equation modelling a damped system.
We show that it is possible to identify preferable choices to be made for parameters and methods.
Overall, we extend the current literature in the field by showing how to obtain reliable and accurate results with the neural network approach.
arXiv Detail & Related papers (2020-12-03T12:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.