Efficient physics-informed neural networks using hash encoding
- URL: http://arxiv.org/abs/2302.13397v1
- Date: Sun, 26 Feb 2023 20:00:23 GMT
- Title: Efficient physics-informed neural networks using hash encoding
- Authors: Xinquan Huang and Tariq Alkhalifah
- Abstract summary: Physics-informed neural networks (PINNs) have attracted a lot of attention in scientific computing.
We propose to incorporate multi-resolution hash encoding into PINNs to improve the training efficiency.
We test the proposed method on three problems, including Burgers equation, Helmholtz equation, and Navier-Stokes equation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) have attracted a lot of attention in
scientific computing as their functional representation of partial differential
equation (PDE) solutions offers flexibility and accuracy features. However,
their training cost has limited their practical use as a real alternative to
classic numerical methods. Thus, we propose to incorporate multi-resolution
hash encoding into PINNs to improve the training efficiency, as such encoding
offers locally-aware (multi-resolution) coordinate inputs to the neural
network. Borrowing from the neural radiance field (NeRF) community, we
investigate the robustness of calculating the derivatives of such hash encoded
neural networks with respect to the input coordinates, which is often needed by
the PINN loss terms. We propose to replace the automatic differentiation with
finite-difference calculations of the derivatives to address the discontinuous
nature of such derivatives. We also share the appropriate ranges for the hash
encoding hyperparameters to obtain robust derivatives. We test the proposed
method on three problems, including Burgers equation, Helmholtz equation, and
Navier-Stokes equation. The proposed method admits about a 10-fold improvement
in efficiency over the vanilla PINN implementation.
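The two ingredients described above, the hash-encoded coordinate input and the finite-difference replacement for automatic differentiation, can be illustrated with a short sketch. The code below is not the authors' implementation: the grid resolutions, hash-table size, feature width, MLP architecture, and finite-difference step h are assumed values chosen only for illustration, and Burgers' equation appears only because it is one of the tested problems.
```python
# Minimal sketch, not the paper's code: a multi-resolution hash encoding feeding a small
# MLP, with the PDE derivatives taken by central finite differences because the bilinear
# hash interpolation is only piecewise linear in the input coordinates.
import torch
import torch.nn as nn

class HashEncoding2D(nn.Module):
    """Multi-resolution hash encoding for coordinates in [0, 1]^2 (sizes are illustrative)."""
    def __init__(self, n_levels=8, n_features=2, table_size=2**14, base_res=16, growth=1.5):
        super().__init__()
        self.resolutions = [int(base_res * growth ** l) for l in range(n_levels)]
        self.tables = nn.ModuleList(
            [nn.Embedding(table_size, n_features) for _ in range(n_levels)])
        for t in self.tables:
            nn.init.uniform_(t.weight, -1e-4, 1e-4)
        self.table_size = table_size

    def _hash(self, idx):
        # Spatial hash of the integer grid corners (idx: (N, 2)) into the table.
        return (idx[:, 0] ^ (idx[:, 1] * 2654435761)) % self.table_size

    def forward(self, x):                               # x: (N, 2)
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            xg = x * res
            x0 = torch.floor(xg).long()
            w = xg - x0.float()                         # bilinear interpolation weights
            f = 0.0
            for dx in (0, 1):
                for dy in (0, 1):
                    corner = x0 + torch.tensor([dx, dy], device=x.device)
                    wc = ((w[:, 0] if dx else 1 - w[:, 0]) *
                          (w[:, 1] if dy else 1 - w[:, 1])).unsqueeze(-1)
                    f = f + wc * table(self._hash(corner))
            feats.append(f)
        return torch.cat(feats, dim=-1)                 # (N, n_levels * n_features)

class HashPINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = HashEncoding2D()
        d = len(self.enc.resolutions) * 2
        self.mlp = nn.Sequential(nn.Linear(d, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, xt):                              # xt: (N, 2) = (x, t)
        return self.mlp(self.enc(xt))

def burgers_residual_fd(model, xt, h=1e-3, nu=0.01 / torch.pi):
    """Burgers residual u_t + u*u_x - nu*u_xx via central finite differences."""
    ex = torch.tensor([h, 0.0], device=xt.device)
    et = torch.tensor([0.0, h], device=xt.device)
    u = model(xt)
    up_x, um_x = model(xt + ex), model(xt - ex)
    u_t = (model(xt + et) - model(xt - et)) / (2 * h)
    u_x = (up_x - um_x) / (2 * h)
    u_xx = (up_x - 2 * u + um_x) / h ** 2
    return u_t + u * u_x - nu * u_xx

model = HashPINN()
xt = torch.rand(1024, 2)                                # random collocation points in [0, 1]^2
loss = burgers_residual_fd(model, xt).pow(2).mean()     # PDE term of the PINN loss
loss.backward()                                         # weight gradients still use autograd
```
In the paper's setting, the hash-encoding hyperparameters and the finite-difference step would need to fall within the ranges the authors report for the derivatives to remain robust.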
Related papers
- PMNN: Physical Model-driven Neural Network for solving time-fractional
differential equations [17.66402435033991]
An innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations.
It effectively combines deep neural networks (DNNs) with approximation of fractional derivatives.
arXiv Detail & Related papers (2023-10-07T12:43:32Z)
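For background on the fractional-derivative approximation mentioned in the PMNN entry above, the snippet below shows the standard L1 discretization of a Caputo derivative of order 0 < alpha < 1 on a uniform time grid. It is a generic textbook scheme, not necessarily the construction used in PMNN.
```python
# Generic L1 discretization of the Caputo fractional derivative of order 0 < alpha < 1,
# shown only as background for "approximation of fractional derivatives"; the PMNN
# paper may use a different scheme.
import math
import numpy as np

def caputo_l1(u, tau, alpha):
    """Approximate D^alpha u at t_n = n*tau for n = 1..N, given samples u[0..N]."""
    n_steps = len(u) - 1
    coef = tau ** (-alpha) / math.gamma(2 - alpha)
    k = np.arange(n_steps)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)          # L1 weights b_k
    du = np.diff(u)                                        # u_{j+1} - u_j
    out = np.zeros(n_steps)
    for n in range(1, n_steps + 1):
        out[n - 1] = coef * np.sum(b[:n] * du[n - 1::-1])  # sum_k b_k (u_{n-k} - u_{n-k-1})
    return out

# Sanity check: for u(t) = t the Caputo derivative is t^(1-alpha) / Gamma(2-alpha),
# and the L1 scheme reproduces it exactly (up to round-off).
alpha = 0.5
t = np.linspace(0.0, 1.0, 101)
approx = caputo_l1(t, t[1] - t[0], alpha)
exact = t[1:] ** (1 - alpha) / math.gamma(2 - alpha)
assert np.allclose(approx, exact)
```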
- RBF-MGN: Solving spatiotemporal PDEs with Physics-informed Graph Neural
Network [4.425915683879297]
We propose a novel framework based on graph neural networks (GNNs) and the radial basis function finite difference (RBF-FD) method.
RBF-FD is used to construct a high-precision difference format of the differential equations to guide model training.
We illustrate the generalizability, accuracy, and efficiency of the proposed algorithms on different PDE parameters.
arXiv Detail & Related papers (2022-12-06T10:08:02Z)
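For background on the RBF-FD component mentioned in the RBF-MGN entry above, the sketch below assembles Laplacian stencil weights on scattered nodes using a polyharmonic-spline RBF with quadratic polynomial augmentation. This is a generic RBF-FD construction, not the specific difference format built in that paper.
```python
# Generic RBF-FD sketch: weights w with Laplacian(u)(xc) ~ sum_i w_i * u(x_i) on a
# scattered stencil, built from polyharmonic-spline RBFs (phi(r) = r^3) augmented with
# quadratic monomials. Standard background only -- not RBF-MGN's exact construction.
import numpy as np

def rbf_fd_laplacian_weights(xc, nodes):
    """xc: (2,) stencil centre; nodes: (n, 2) scattered neighbours, n >= 6."""
    n = len(nodes)
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    A = r ** 3                                             # RBF interpolation block
    dx, dy = (nodes - xc).T                                # monomials shifted to the centre
    P = np.column_stack([np.ones(n), dx, dy, dx ** 2, dx * dy, dy ** 2])
    rc = np.linalg.norm(nodes - xc, axis=-1)
    b_rbf = 9.0 * rc                                       # Laplacian of r^3 in 2D is 9r
    b_poly = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])      # Laplacians of the monomials
    M = np.block([[A, P], [P.T, np.zeros((6, 6))]])
    rhs = np.concatenate([b_rbf, b_poly])
    return np.linalg.solve(M, rhs)[:n]                     # stencil weights

# The augmented stencil is exact for quadratics: for u = x^2 + y^2 the weighted sum
# reproduces Laplacian(u) = 4 up to round-off.
rng = np.random.default_rng(0)
xc = np.array([0.3, 0.4])
nodes = np.vstack([xc, xc + 0.1 * rng.standard_normal((12, 2))])
w = rbf_fd_laplacian_weights(xc, nodes)
u = nodes[:, 0] ** 2 + nodes[:, 1] ** 2
print(w @ u)                                               # ~4.0
```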
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- DEQGAN: Learning the Loss Function for PINNs with Generative Adversarial
Networks [1.0499611180329804]
This work presents Differential Equation GAN (DEQGAN), a novel method for solving differential equations using generative adversarial networks.
We show that DEQGAN achieves multiple orders of magnitude lower mean squared errors than PINNs.
We also show that DEQGAN achieves solution accuracies that are competitive with popular numerical methods.
arXiv Detail & Related papers (2022-09-15T06:39:47Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
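For background on the POD component in the entry above, the snippet below builds a reduced-order basis from solution snapshots via the SVD; the snapshot family and truncation tolerance are assumed for illustration, and the regression networks and branch network described in the paper are not reproduced.
```python
# Generic POD sketch: build a reduced-order basis from parametric solution snapshots
# with the SVD. The snapshot family below is an arbitrary stand-in; in the paper,
# neural networks are regressed onto such a basis and a branch network maps the PDE
# parameters to coefficients, which is not shown here.
import numpy as np

x = np.linspace(0.0, 1.0, 200)                        # spatial grid
mus = np.linspace(1.0, 5.0, 40)                       # assumed PDE-parameter samples
snapshots = np.stack([np.sin(mu * np.pi * x) * np.exp(-mu * x) for mu in mus], axis=1)

# POD modes are the left singular vectors of the snapshot matrix (columns = snapshots).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 1.0 - 1e-6)) + 1      # modes capturing ~all the energy
basis = U[:, :r]                                      # (n_grid, r) reduced-order basis

# Any snapshot is well approximated by its projection onto the r POD modes.
coeffs = basis.T @ snapshots[:, 7]                    # reduced coordinates of one snapshot
recon = basis @ coeffs
print(r, np.max(np.abs(recon - snapshots[:, 7])))     # r << 40, small projection error
```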
- Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution with a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
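The derivative trick in the entry above rests on Stein's identity for Gaussian-smoothed functions. The snippet below is a generic Monte Carlo illustration of those identities (using standard variance-reduced central forms), not the paper's training code; the test function, smoothing scale, and sample count are arbitrary choices.
```python
# Sketch of the Stein-identity estimators behind "second-order derivatives without
# back-propagation": for the Gaussian-smoothed f_s(x) = E[f(x + d)], d ~ N(0, s^2 I),
#   grad f_s(x)    = E[ d * f(x + d) ] / s^2
#   hessian f_s(x) = E[ f(x + d) * (d d^T - s^2 I) ] / s^4
# so both derivatives need only forward evaluations of f. The central/antithetic forms
# used below are variance-reduced equivalents. Illustration only, not the paper's code.
import numpy as np

def stein_grad_and_hess(f, x, s=0.05, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    d = s * rng.standard_normal((n, x.size))              # Gaussian perturbations
    fp, fm, f0 = f(x + d), f(x - d), f(x)                 # forward evaluations only
    grad = (d * (fp - fm)[:, None]).mean(0) / (2 * s ** 2)
    outer = d[:, :, None] * d[:, None, :] - s ** 2 * np.eye(x.size)
    hess = ((fp + fm - 2 * f0)[:, None, None] * outer).mean(0) / (2 * s ** 4)
    return grad, hess

# Check on f(x, y) = sin(x) + x*y^2: df/dx = cos(x) + y^2 and d^2f/dy^2 = 2x.
f = lambda z: np.sin(z[..., 0]) + z[..., 0] * z[..., 1] ** 2
g, H = stein_grad_and_hess(f, np.array([0.5, 1.0]))
print(g[0], np.cos(0.5) + 1.0)                            # both ~1.88
print(H[1, 1], 2 * 0.5)                                   # both ~1.0
```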
- Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a deep neural network (DNN) approach to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Unsupervised Learning of Solutions to Differential Equations with
Generative Adversarial Networks [1.1470070927586016]
We develop a novel method for solving differential equations with unsupervised neural networks.
We show that our method, which we call Differential Equation GAN (DEQGAN), can obtain multiple orders of magnitude lower mean squared errors.
arXiv Detail & Related papers (2020-07-21T23:36:36Z)
- Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- A Derivative-Free Method for Solving Elliptic Partial Differential
Equations with Deep Neural Networks [2.578242050187029]
We introduce a deep neural network based method for solving a class of elliptic partial differential equations.
We approximate the solution of the PDE with a deep neural network which is trained under the guidance of a probabilistic representation of the PDE.
As Brownian walkers explore the domain, the deep neural network is iteratively trained using a form of reinforcement learning.
arXiv Detail & Related papers (2020-01-17T03:29:24Z)
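The probabilistic representation referred to in the entry above can be illustrated with the classic walk-on-spheres estimator for Laplace's equation on the unit disk, where Brownian exit points are sampled sphere by sphere and the boundary data is averaged. This is generic background for the Brownian-walker idea only; the paper couples such walkers with an iteratively trained deep network, which is not reproduced here.
```python
# Generic walk-on-spheres sketch of the probabilistic (Brownian-walker) representation
# of an elliptic PDE: for Laplace's equation, u(x) = E[ g(X_exit) ], where X_exit is the
# point at which a Brownian path started at x first hits the boundary. Illustration only;
# the paper couples such walkers with an iteratively trained deep network.
import numpy as np

def walk_on_spheres_disk(x0, g, n_walks=20_000, eps=1e-3, seed=0):
    """Estimate u(x0) for Laplace's equation on the unit disk with boundary data g."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)              # distance to the circle boundary
            if r < eps:                              # close enough: read the boundary value
                total += g(x / np.linalg.norm(x))
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)    # jump to a uniform point on the
            x = x + r * np.array([np.cos(theta), np.sin(theta)])  # largest inscribed circle
    return total / n_walks

# Boundary data g = x^2 - y^2 is harmonic, so the interior solution is exactly x^2 - y^2.
g = lambda p: p[0] ** 2 - p[1] ** 2
x0 = (0.5, 0.2)
print(walk_on_spheres_disk(x0, g), x0[0] ** 2 - x0[1] ** 2)   # estimate vs exact 0.21
```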
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.