RANG: A Residual-based Adaptive Node Generation Method for
Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2205.01051v2
- Date: Thu, 5 May 2022 01:21:00 GMT
- Title: RANG: A Residual-based Adaptive Node Generation Method for
Physics-Informed Neural Networks
- Authors: Wei Peng, Weien Zhou, Xiaoya Zhang, Wen Yao, Zheliang Liu
- Abstract summary: Learning solutions of partial differential equations with Physics-Informed Neural Networks (PINNs) is an attractive alternative approach to traditional solvers.
Despite the success of PINNs in accurately solving a wide variety of PDEs, the method still requires improvements in terms of computational efficiency.
We propose the Residual-based Adaptive Node Generation (RANG) approach for efficient training of PINNs.
- Score: 4.642273921499256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning solutions of partial differential equations (PDEs) with
Physics-Informed Neural Networks (PINNs) is an attractive alternative approach
to traditional solvers due to its flexibility and ease of incorporating
observed data. Despite the success of PINNs in accurately solving a wide
variety of PDEs, the method still requires improvements in terms of
computational efficiency. One possible improvement idea is to optimize the
generation of training point sets. Residual-based adaptive sampling and
quasi-uniform sampling approaches have each been applied to improve the
training of PINNs. To benefit from both, we
propose the Residual-based Adaptive Node Generation (RANG) approach for
efficient training of PINNs, which is based on a variable density nodal
distribution method for RBF-FD. The method is also enhanced by a memory
mechanism to further improve training stability. We conduct experiments on
three linear PDEs and three nonlinear PDEs with various node generation
methods, numerically verifying the accuracy and efficiency of the proposed
method against the predominant uniform sampling approach.
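The general residual-based adaptive idea behind methods like RANG can be sketched as follows. This is a minimal illustration only, not the variable density RBF-FD node generator the paper actually uses: the `residual` function is a hypothetical stand-in for the PDE residual of a partially trained PINN, and the `memory` fraction mimics the paper's memory mechanism by retaining part of the previous node set.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(points):
    # Hypothetical stand-in for the PDE residual |r(x)| evaluated by a
    # trained PINN; here an artificial function peaked near x = 0.5.
    return np.exp(-100.0 * (points[:, 0] - 0.5) ** 2) + 1e-3

def adaptive_node_step(prev_nodes, n_new=64, n_candidates=1024, memory=0.5):
    """One adaptive resampling step: draw uniform candidates on [0, 1]^2,
    keep nodes where the residual density is high, and retain a fraction
    of the previous node set as a simple memory mechanism."""
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, 2))
    r = residual(candidates)
    probs = r / r.sum()  # sampling density proportional to the residual
    idx = rng.choice(n_candidates, size=n_new, replace=False, p=probs)
    new_nodes = candidates[idx]
    n_keep = int(memory * len(prev_nodes))
    kept = prev_nodes[rng.choice(len(prev_nodes), size=n_keep, replace=False)]
    return np.vstack([kept, new_nodes])

nodes = rng.uniform(0.0, 1.0, size=(64, 2))   # initial uniform node set
nodes = adaptive_node_step(nodes)             # 32 kept + 64 new nodes
```

In a real PINN training loop, a step like this would be interleaved with optimizer updates, so the collocation set tracks where the residual is currently largest.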
Related papers
- Multi-Dimensional Visual Data Recovery: Scale-Aware Tensor Modeling and Accelerated Randomized Computation [51.65236537605077]
We propose a new type of network compression optimization technique, fully randomized tensor network compression (FCTN). FCTN has significant advantages in correlation characterization and transpositional invariance in algebra, and has notable achievements in multi-dimensional data processing and analysis. We derive efficient algorithms with guarantees to solve the formulated models.
arXiv Detail & Related papers (2026-02-13T14:56:37Z) - Self-adaptive weighting and sampling for physics-informed neural networks [0.5302833929459496]
We introduce a hybrid adaptive sampling and weighting method to enhance the performance of physics-informed neural networks (PINNs). The proposed framework consistently improves prediction accuracy and training efficiency, offering a more robust approach for solving PDEs with PINNs.
arXiv Detail & Related papers (2025-11-07T17:48:11Z) - Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with vector quantile regression and applies it to multivariate conformal prediction.
arXiv Detail & Related papers (2025-09-29T19:50:19Z) - Provably accurate adaptive sampling for collocation points in physics-informed neural networks [11.912466054588327]
Physics-informed Neural Networks (PINNs) have emerged as an efficient way to learn surrogate solvers.
We introduce a provably accurate sampling method for collocation points based on the Hessian of the PDE residuals.
arXiv Detail & Related papers (2025-04-01T15:45:08Z) - An Adaptive Collocation Point Strategy For Physics Informed Neural Networks via the QR Discrete Empirical Interpolation Method [1.2289361708127877]
We propose an adaptive collocation point selection strategy utilizing the QR Discrete Empirical Interpolation Method (QR-DEIM)
Our results on benchmark PDEs, including the wave, Allen-Cahn, and Burgers' equations, demonstrate that our QR-DEIM-based approach improves PINN accuracy compared to existing methods.
arXiv Detail & Related papers (2025-01-13T21:24:15Z) - Adaptive Training of Grid-Dependent Physics-Informed Kolmogorov-Arnold Networks [4.216184112447278]
Physics-Informed Neural Networks (PINNs) have emerged as a robust framework for solving Partial Differential Equations (PDEs)
We present a fast JAX-based implementation of grid-dependent Physics-Informed Kolmogorov-Arnold Networks (PIKANs) for solving PDEs.
We demonstrate that the adaptive features significantly enhance solution accuracy, decreasing the L2 error relative to the reference solution by up to 43.02%.
arXiv Detail & Related papers (2024-07-24T19:55:08Z) - Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z) - RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs)
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z) - An Efficient Deep Learning Approach for Approximating Parameter-to-Solution Maps of PDEs [12.227294893496342]
We propose an efficient approach combining reduced collocation methods (RCMs) and deep neural networks (DNNs). In the approximation analysis section, we rigorously derive sharp upper bounds on the complexity of the neural networks. The POD-DNN has demonstrated significantly accelerated computation speeds compared with conventional numerical methods.
arXiv Detail & Related papers (2024-04-10T08:52:12Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Solving Partial Differential Equations with Point Source Based on
Physics-Informed Neural Networks [33.18757454787517]
In recent years, deep learning technology has been used to solve partial differential equations (PDEs)
We propose a universal solution to tackle this problem with three novel techniques.
We evaluate the proposed method with three representative PDEs, and the experimental results show that our method outperforms existing deep learning-based methods with respect to accuracy, efficiency, and versatility.
arXiv Detail & Related papers (2021-11-02T06:39:54Z) - Sparse Bayesian Deep Learning for Dynamic System Identification [14.040914364617418]
This paper proposes a sparse Bayesian treatment of deep neural networks (DNNs) for system identification.
The proposed Bayesian approach offers a principled way to alleviate the challenges by marginal likelihood/model evidence approximation.
The effectiveness of the proposed Bayesian approach is demonstrated on several linear and nonlinear system identification benchmarks.
arXiv Detail & Related papers (2021-07-27T16:09:48Z) - Efficient training of physics-informed neural networks via importance
sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to compute the response of systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
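The importance-sampling idea above can be sketched as follows; this is a minimal illustration under assumed details, where `pointwise_loss` is a hypothetical stand-in for the per-point PINN residual loss. Collocation points are drawn with probability proportional to their current loss, and weights of the form w_i = 1 / (N * p_i) keep the resulting gradient estimate unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)

def pointwise_loss(x):
    # Hypothetical per-point PINN residual loss on [0, 1].
    return 1.0 + np.sin(4.0 * np.pi * x) ** 2

def importance_batch(pool, batch_size=32):
    """Pick a training batch with probability proportional to the
    per-point loss, returning importance weights that keep the
    gradient estimate unbiased (w_i = 1 / (N * p_i))."""
    losses = pointwise_loss(pool)
    p = losses / losses.sum()
    idx = rng.choice(len(pool), size=batch_size, replace=True, p=p)
    weights = 1.0 / (len(pool) * p[idx])
    return pool[idx], weights

pool = rng.uniform(0.0, 1.0, size=2048)  # fixed pool of collocation points
batch, w = importance_batch(pool)        # loss-weighted minibatch
```

During training, the weighted per-point losses `w * pointwise_loss(batch)` would replace the uniform-minibatch loss, concentrating optimizer effort where the residual is large.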
arXiv Detail & Related papers (2021-04-26T02:45:10Z) - Hybrid FEM-NN models: Combining artificial neural networks with the
finite element method [0.0]
We present a methodology combining neural networks with physical principle constraints in the form of partial differential equations (PDEs)
The approach allows training neural networks while respecting the PDEs as a strong constraint in the optimisation, as opposed to making them part of the loss function.
We demonstrate the method on a complex cardiac cell model problem using deep neural networks.
arXiv Detail & Related papers (2021-01-04T13:36:06Z) - Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem)
AdaRem adjusts the parameter-wise learning rate according to whether the direction of a parameter's past changes is aligned with the direction of the current gradient.
Our method outperforms previous adaptive learning rate-based algorithms in terms of the training speed and the test error.
arXiv Detail & Related papers (2020-10-21T14:49:00Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.