Hierarchical Learning to Solve Partial Differential Equations Using
Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2112.01254v1
- Date: Thu, 2 Dec 2021 13:53:42 GMT
- Title: Hierarchical Learning to Solve Partial Differential Equations Using
Physics-Informed Neural Networks
- Authors: Jihun Han and Yoonsang Lee
- Abstract summary: We propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution to partial differential equations.
We validate the efficiency and robustness of the proposed hierarchical approach through a suite of linear and nonlinear partial differential equations.
- Score: 2.0305676256390934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The neural-network-based approach to solving partial differential
equations has attracted considerable attention due to its simplicity and
flexibility in representing the solution of the partial differential equation.
In training a neural network, the network tends to learn global features
corresponding to low-frequency components, while high-frequency components are
approximated at a much slower rate (the F-principle). For a class of equations
whose solutions contain a wide range of scales, the network training process
can suffer from slow convergence and low accuracy due to its inability to
capture the high-frequency components. In this work, we propose a hierarchical
approach to improve the convergence rate and accuracy of the neural network
solution to partial differential equations. The proposed method comprises
multiple training levels, in which a newly introduced neural network is guided
to learn the residual of the previous level's approximation. By the nature of
the neural network training process, the higher-level corrections are inclined
to capture the high-frequency components. We validate the efficiency and
robustness of the proposed hierarchical approach through a suite of linear and
nonlinear partial differential equations.
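The hierarchical scheme lends itself to a short sketch: each level freezes the
accumulated approximation and trains a fresh network on the remaining PDE
residual. The following minimal PyTorch illustration applies the idea to a 1D
Poisson problem u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0; the test
equation, network width, level count, and optimizer settings are illustrative
assumptions, not the authors' configuration.

    import torch

    def mlp(width=32):
        # Small fully connected network acting as one correction level.
        return torch.nn.Sequential(
            torch.nn.Linear(1, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1))

    # Manufactured right-hand side so the exact solution is sin(4*pi*x).
    f = lambda x: -(4 * torch.pi) ** 2 * torch.sin(4 * torch.pi * x)

    def pde_residual(u_fn, x):
        # PDE residual u''(x) - f(x) via automatic differentiation.
        x = x.requires_grad_(True)
        u = u_fn(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        return d2u - f(x)

    solution = lambda x: torch.zeros_like(x)  # level-0 "previous approximation"
    for level in range(3):  # multiple training levels
        net, prev = mlp(), solution
        u_fn = lambda x, net=net, prev=prev: prev(x) + net(x)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for step in range(2000):
            x = torch.rand(128, 1)             # interior collocation points
            xb = torch.tensor([[0.0], [1.0]])  # boundary points
            loss = (pde_residual(u_fn, x).pow(2).mean()
                    + u_fn(xb).pow(2).mean())
            opt.zero_grad(); loss.backward(); opt.step()
        for p in net.parameters():
            p.requires_grad_(False)  # freeze; the next level learns this residual
        solution = u_fn

Each level optimizes only its own parameters, so the lower levels act as a
fixed low-frequency scaffold and, by the F-principle, the later corrections
tend to pick up the remaining high-frequency content.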
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations and compared with the well-known Physics-Informed Neural Network (PINN) method.
arXiv Detail & Related papers (2024-06-06T05:31:45Z) - Enhanced physics-informed neural networks with domain scaling and
residual correction methods for multi-frequency elliptic problems [11.707981310045742]
Neural network approximation methods are developed for elliptic partial differential equations with multi-frequency solutions.
The efficiency and accuracy of the proposed methods are demonstrated for multi-frequency model problems.
arXiv Detail & Related papers (2023-11-07T06:08:47Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - HNS: An Efficient Hermite Neural Solver for Solving Time-Fractional
Partial Differential Equations [12.520882780496738]
We present the high-precision Hermite Neural Solver (HNS) for solving time-fractional partial differential equations.
The experimental results show that HNS has significantly improved accuracy and flexibility compared to existing L1-based methods.
arXiv Detail & Related papers (2023-10-07T12:44:47Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Random Weight Factorization Improves the Training of Continuous Neural
Representations [1.911678487931003]
Continuous neural representations have emerged as a powerful and flexible alternative to classical discretized representations of signals.
We propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers.
We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate (a sketch of the parameterization appears after this list).
arXiv Detail & Related papers (2022-10-03T23:48:48Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures illustrate that binary networks learned using our method achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - ODEN: A Framework to Solve Ordinary Differential Equations using
Artificial Neural Networks [0.0]
We propose a specific loss function, which does not require knowledge of the exact solution, to evaluate neural networks' performance.
Neural networks are shown to be proficient at approximating continuous solutions within their training domains.
A user-friendly and adaptable open-source code (ODE$\mathcal{N}$) is provided on GitHub.
arXiv Detail & Related papers (2020-05-28T15:34:10Z)
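For the random weight factorization entry above, a minimal sketch may make the
parameterization concrete: each linear layer's weight is written as
W = diag(exp(s)) V with a learnable per-neuron log-scale s. This is a sketch
under stated assumptions; the initialization constants mu and sigma below are
placeholders, not the paper's recommended values.

    import torch

    class FactorizedLinear(torch.nn.Module):
        # Linear layer parameterized as W = diag(exp(s)) @ V; under gradient
        # descent, exp(s_i) behaves like a per-neuron self-adaptive step size.
        def __init__(self, in_dim, out_dim, mu=1.0, sigma=0.1):
            super().__init__()
            w = torch.randn(out_dim, in_dim) * (2.0 / in_dim) ** 0.5
            s = mu + sigma * torch.randn(out_dim)  # per-neuron log-scale
            self.s = torch.nn.Parameter(s)
            # Divide out the scales so exp(s) * v reproduces w at initialization.
            self.v = torch.nn.Parameter(w / torch.exp(s).unsqueeze(1))
            self.b = torch.nn.Parameter(torch.zeros(out_dim))

        def forward(self, x):
            weight = torch.exp(self.s).unsqueeze(1) * self.v
            return torch.nn.functional.linear(x, weight, self.b)

Dropped in place of torch.nn.Linear, the layer represents the same function at
initialization while changing the optimization geometry, which is the
loss-landscape effect the summary above describes.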
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.