Good Lattice Training: Physics-Informed Neural Networks Accelerated by
Number Theory
- URL: http://arxiv.org/abs/2307.13869v1
- Date: Wed, 26 Jul 2023 00:01:21 GMT
- Title: Good Lattice Training: Physics-Informed Neural Networks Accelerated by
Number Theory
- Authors: Takashi Matsubara, Takaharu Yaguchi
- Abstract summary: We propose a new technique called good lattice training (GLT) for PINNs.
GLT offers a set of collocation points that are effective even with a small number of points and for multi-dimensional spaces.
Our experiments demonstrate that GLT requires 2--20 times fewer collocation points than uniformly random sampling or Latin hypercube sampling.
- Score: 7.462336024223669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) offer a novel and efficient approach
to solving partial differential equations (PDEs). Their success lies in the
physics-informed loss, which trains a neural network to satisfy a given PDE at
specific points and to approximate the solution. However, the solutions to PDEs
are inherently infinite-dimensional, and the distance between the output and
the solution is defined by an integral over the domain. Therefore, the
physics-informed loss only provides a finite approximation, and selecting
appropriate collocation points becomes crucial to suppress the discretization
errors, although this aspect has often been overlooked. In this paper, we
propose a new technique called good lattice training (GLT) for PINNs, inspired
by number theoretic methods for numerical analysis. GLT offers a set of
collocation points that are effective even with a small number of points and
for multi-dimensional spaces. Our experiments demonstrate that GLT requires
2--20 times fewer collocation points (resulting in lower computational cost)
than uniformly random sampling or Latin hypercube sampling, while achieving
competitive performance.
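To make the construction concrete, below is a minimal NumPy sketch of a good-lattice-point set (a rank-1 lattice): the i-th collocation point is the fractional part of i*z/n for a generating vector z. The Fibonacci generating vector used here is the classical 2D choice and is an assumption for illustration; the paper selects generating vectors by number-theoretic criteria, and this is not the authors' code.

```python
import numpy as np

def good_lattice_points(n: int, z: np.ndarray) -> np.ndarray:
    """Rank-1 lattice point set: x_i = frac(i * z / n), i = 0, ..., n-1."""
    i = np.arange(n).reshape(-1, 1)           # point indices as a column
    return (i * z.reshape(1, -1) / n) % 1.0   # fractional parts in [0, 1)^d

# Classical 2D Fibonacci rule: n = F_k points with generating vector
# z = (1, F_{k-1}); here F_16 = 987 and F_15 = 610.
n, z = 987, np.array([1, 610])
collocation = good_lattice_points(n, z)       # shape (987, 2), in [0, 1)^2

# These points would stand in for uniformly random or Latin hypercube
# samples when evaluating the physics-informed loss.
```

Because such a lattice covers the unit cube far more evenly than i.i.d. random samples, the domain integral that the physics-informed loss approximates can be discretized with fewer collocation points.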
Related papers
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDEs) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
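As a rough illustration of the sampled-network idea (an assumption-level sketch, not the paper's algorithm), the snippet below draws hidden weights and biases from a generic Gaussian, freezes them, and fits only the linear output layer by least squares; the paper's data-agnostic and data-dependent sampling distributions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200                                      # number of hidden neurons
W = rng.normal(size=(m, 1))                  # sampled hidden weights (frozen)
b = rng.normal(size=m)                       # sampled hidden biases (frozen)

x = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel()            # toy target function

phi = np.tanh(x @ W.T + b)                   # fixed random features
c, *_ = np.linalg.lstsq(phi, y, rcond=None)  # fit output weights only
y_hat = phi @ c                              # sampled-network prediction
```

Only the linear solve touches the data, so training reduces to a least-squares problem instead of iterative gradient descent.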
arXiv Detail & Related papers (2024-05-31T14:24:39Z)
- Global Convergence of Deep Galerkin and PINNs Methods for Solving Partial Differential Equations [0.0]
A variety of deep learning methods have been developed to try to solve high-dimensional PDEs by approximating the solution using a neural network.
We prove global convergence for one of the commonly used deep learning algorithms for solving PDEs, the Deep Galerkin Method (DGM).
arXiv Detail & Related papers (2023-05-10T09:20:11Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
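For intuition, here is a minimal sketch of one implicit gradient step theta_new = theta - lr * grad(theta_new) on a toy quadratic loss, solved with Newton's method. This illustrates only the implicit update itself, not the paper's full ISGD training algorithm; grad and grad_prime are stand-ins for a PINN loss gradient.

```python
def grad(t):
    return 4.0 * t          # gradient of the toy loss f(t) = 2 * t**2

def grad_prime(t):
    return 4.0              # second derivative of the toy loss

def implicit_step(theta, lr, newton_iters=20):
    # Solve t = theta - lr * grad(t) for t via Newton's method on
    # F(t) = t - theta + lr * grad(t), with F'(t) = 1 + lr * grad_prime(t).
    t = theta
    for _ in range(newton_iters):
        t -= (t - theta + lr * grad(t)) / (1.0 + lr * grad_prime(t))
    return t

theta, lr = 1.0, 0.6        # explicit GD diverges here: |1 - lr * 4| > 1
for _ in range(20):
    theta = implicit_step(theta, lr)
print(theta)                # decays toward 0: the implicit step is stable
```

The implicit step contracts by 1 / (1 + lr * 4) per iteration, so it remains stable at step sizes where the explicit update would oscillate or diverge.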
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
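The sketch below shows the spatial staggering idea in its simplest form (an assumption-level illustration, not the paper's exact scheme): a fine 2D field is split by strided slicing into s*s coarse sub-fields, each of which could be handled by a cheaper coarse-resolution solver in parallel, and the fine field is reassembled exactly.

```python
import numpy as np

def stagger(field: np.ndarray, s: int):
    # Split a fine field into s*s staggered coarse sub-fields.
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger(subfields, s: int, shape):
    # Reassemble the fine field from its staggered sub-fields.
    field = np.empty(shape, dtype=subfields[0].dtype)
    for k, (i, j) in enumerate((i, j) for i in range(s) for j in range(s)):
        field[i::s, j::s] = subfields[k]
    return field

u = np.random.rand(64, 64)                          # fine-resolution state
subs = stagger(u, 2)                                # four 32x32 subtasks
assert np.allclose(unstagger(subs, 2, u.shape), u)  # exact reconstruction
```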
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Enhanced Physics-Informed Neural Networks with Augmented Lagrangian Relaxation Method (AL-PINNs) [1.7403133838762446]
Physics-Informed Neural Networks (PINNs) are powerful approximators of solutions to nonlinear partial differential equations (PDEs).
We propose an Augmented Lagrangian relaxation method for PINNs (AL-PINNs).
We demonstrate through various numerical experiments that AL-PINNs yield a much smaller relative error than state-of-the-art adaptive loss-balancing algorithms.
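Below is a generic augmented Lagrangian loss sketch in PyTorch to fix ideas: boundary conditions are treated as equality constraints with multipliers lam and penalty weight mu, with a dual update after each primal step. This is the textbook recipe, not necessarily the exact AL-PINN formulation; pde_res and bc_res are placeholder tensors.

```python
import torch

def augmented_lagrangian_loss(pde_residual, bc_residual, lam, mu):
    # PDE residual term plus the augmented Lagrangian treatment of the
    # boundary-condition constraint bc_residual == 0.
    return (pde_residual.pow(2).mean()
            + (lam * bc_residual).mean()             # multiplier (linear) term
            + 0.5 * mu * bc_residual.pow(2).mean())  # quadratic penalty term

# Stand-in residuals; in a real PINN these come from autograd on the network.
pde_res = torch.randn(128, requires_grad=True)
bc_res = torch.randn(16, requires_grad=True)
lam, mu = torch.zeros(16), 10.0

loss = augmented_lagrangian_loss(pde_res, bc_res, lam, mu)
loss.backward()                      # primal step would update the network
with torch.no_grad():
    lam += mu * bc_res               # dual ascent on the multipliers
```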
arXiv Detail & Related papers (2022-04-29T08:33:11Z)
- Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
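As a hedged sketch of the agreement criterion, the snippet below marks candidate collocation points as trusted where the variance across ensemble predictions is small, so the training region can be expanded gradually; the threshold and the stand-in predictions are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_candidates = 5, 1000
x = np.linspace(0.0, 2.0, n_candidates)   # candidate collocation points

# Stand-in ensemble predictions: members agree near x = 0 (the observed
# data) and drift apart farther away, mimicking PINN ensembles in training.
preds = np.sin(x) + rng.normal(scale=0.05 * x, size=(n_models, n_candidates))

tau = 1e-3                                # agreement threshold (assumed)
trusted = preds.var(axis=0) < tau         # where the ensemble agrees
train_points = x[trusted]                 # expanded training interval
```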
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- Solving PDEs on Unknown Manifolds with Machine Learning [8.220217498103315]
This paper presents a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds.
We show that the proposed NN solver can robustly generalize the PDE solution to new data points, with errors that are almost identical to those on the training points.
arXiv Detail & Related papers (2021-06-12T03:55:15Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach improves the convergence behavior of PINN training.
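A minimal sketch of one way to do residual-based importance sampling is shown below: draw a large candidate pool, weight each point by the magnitude of the current PDE residual, and resample the training batch accordingly. The residual function is a stand-in, and the paper's exact sampling scheme may differ.

```python
import numpy as np

def residual_fn(x):
    return np.sin(8 * np.pi * x)        # stand-in for the PDE residual

rng = np.random.default_rng(0)
pool = rng.uniform(0.0, 1.0, size=4096)  # candidate collocation points
w = np.abs(residual_fn(pool))             # residual magnitudes as weights
p = w / w.sum()                           # normalize to probabilities
batch = rng.choice(pool, size=256, replace=True, p=p)  # training batch
```

Points where the residual is large are sampled more often, concentrating training effort where the PDE is violated most.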
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators for PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)