A comprehensive study of non-adaptive and residual-based adaptive
sampling for physics-informed neural networks
- URL: http://arxiv.org/abs/2207.10289v1
- Date: Thu, 21 Jul 2022 03:57:27 GMT
- Title: A comprehensive study of non-adaptive and residual-based adaptive
sampling for physics-informed neural networks
- Authors: Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, Lu Lu
- Abstract summary: Physics-informed neural networks (PINNs) have been shown to be an effective tool for solving forward and inverse problems of partial differential equations (PDEs).
PINNs embed the PDEs into the loss of the neural network, and this PDE loss is evaluated at a set of scattered residual points.
In the existing studies on PINNs, only a few simple residual point sampling methods have been used.
- Score: 3.0975832075350165
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Physics-informed neural networks (PINNs) have been shown to be an
effective tool for solving forward and inverse problems of partial differential
equations (PDEs). PINNs embed the PDEs into the loss of the neural network, and
this PDE loss is evaluated at a set of scattered residual points. The
distribution of these points is highly important to the performance of PINNs.
However, in the existing studies on PINNs, only a few simple residual point
sampling methods have been used. Here, we present a comprehensive study of two
categories of sampling: non-adaptive uniform sampling and adaptive nonuniform
sampling. We consider six uniform sampling methods: (1) equispaced uniform grid, (2)
uniformly random sampling, (3) Latin hypercube sampling, (4) Halton sequence,
(5) Hammersley sequence, and (6) Sobol sequence. We also consider a resampling
strategy for uniform sampling. To improve the sampling efficiency and the
accuracy of PINNs, we propose two new residual-based adaptive sampling methods:
residual-based adaptive distribution (RAD) and residual-based adaptive
refinement with distribution (RAR-D), which dynamically improve the
distribution of residual points based on the PDE residuals during training.
Hence, we consider a total of 10 different sampling methods: the six
non-adaptive uniform sampling methods, uniform sampling with resampling, the
two proposed adaptive sampling methods, and an existing adaptive sampling
method. We extensively
tested the performance of these sampling methods for four forward problems and
two inverse problems in many setups. Our numerical results presented in this
study are summarized from more than 6000 simulations of PINNs. We show that the
proposed adaptive sampling methods of RAD and RAR-D significantly improve the
accuracy of PINNs with fewer residual points. The results obtained in this
study can also be used as a practical guideline in choosing sampling methods.
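
As a rough illustration of the proposed residual-based adaptive sampling, here
is a minimal NumPy/SciPy sketch. It assumes a sampling density of the form
eps(x)^k / mean(eps^k) + c (this exact form and the defaults k = c = 1 are
assumptions here, not quoted from the abstract), and `residual_fn` is a
hypothetical callable that returns the absolute PDE residual of the current
network at each candidate point:

```python
import numpy as np
from scipy.stats import qmc  # Sobol, Halton, and Latin hypercube samplers


def rad_sample(residual_fn, n_points, dim, k=1.0, c=1.0,
               n_candidates=2**16, seed=0):
    """Sketch of residual-based adaptive distribution (RAD) sampling.

    A dense pool of candidates is drawn from a scrambled Sobol sequence
    (one of the six uniform samplers in the study); n_points residual
    points are then drawn from the pool with probability proportional
    to eps^k / mean(eps^k) + c, so regions with large PDE residual
    receive more points.
    """
    # Candidate pool on the unit hypercube [0, 1]^dim.
    candidates = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n_candidates)

    # residual_fn is a hypothetical callable: |PDE residual| per candidate.
    eps = np.abs(residual_fn(candidates)) ** k
    prob = eps / eps.mean() + c
    prob /= prob.sum()

    rng = np.random.default_rng(seed)
    idx = rng.choice(n_candidates, size=n_points, replace=False, p=prob)
    return candidates[idx]
```

In this sketch, RAD would periodically replace the whole residual set with a
fresh draw from this distribution; RAR-D would instead keep the existing
points and append the newly drawn ones to the training set.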
Related papers
- Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
arXiv Detail & Related papers (2023-11-01T11:52:41Z)
- Adversarial Adaptive Sampling: Unify PINN and Optimal Transport for the Approximation of PDEs [2.526490864645154]
We propose a new minimax formulation that simultaneously optimizes the approximate solution, given by a neural network model, and the random samples in the training set.
The key idea is to use a deep generative model to adjust random samples in the training set such that the residual induced by the approximate PDE solution can maintain a smooth profile.
arXiv Detail & Related papers (2023-05-30T02:59:18Z)
- Unsupervised Learning of Sampling Distributions for Particle Filters [80.6716888175925]
We put forward four methods for learning sampling distributions from observed measurements.
Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
arXiv Detail & Related papers (2023-02-02T15:50:21Z)
- A Novel Adaptive Causal Sampling Method for Physics-Informed Neural Networks [35.25394937917774]
Physics-informed neural networks (PINNs) have become an attractive machine learning method for obtaining solutions of partial differential equations (PDEs).
We introduce temporal causality into adaptive sampling and propose a novel adaptive causal sampling method to improve the performance and efficiency of PINNs.
We demonstrate that by utilizing such a relatively simple sampling method, prediction performance can be improved by up to two orders of magnitude compared with state-of-the-art results.
arXiv Detail & Related papers (2022-10-24T01:51:08Z)
- PCB-RandNet: Rethinking Random Sampling for LIDAR Semantic Segmentation in Autonomous Driving Scene [15.516687293651795]
We propose a new Polar Cylinder Balanced Random Sampling method for semantic segmentation of large-scale LiDAR point clouds.
In addition, a sampling consistency loss is introduced to further improve the segmentation performance and reduce the model's variance under different sampling methods.
Our approach produces excellent performance on both SemanticKITTI and SemanticPOSS benchmarks, achieving a 2.8% and 4.0% improvement, respectively.
arXiv Detail & Related papers (2022-09-28T02:59:36Z)
- Calibrate and Debias Layer-wise Sampling for Graph Convolutional Networks [39.56471534442315]
This paper revisits the approach from a matrix approximation perspective.
We propose a new principle for constructing sampling probabilities and an efficient debiasing algorithm.
Improvements are demonstrated by extensive analyses of estimation variance and experiments on common benchmarks.
arXiv Detail & Related papers (2022-06-01T15:52:06Z)
- DAS: A deep adaptive sampling method for solving partial differential equations [2.934397685379054]
We propose a deep adaptive sampling (DAS) method for solving partial differential equations (PDEs).
Deep neural networks are utilized to approximate the solutions of PDEs and deep generative models are employed to generate new collocation points that refine the training set.
We present a theoretical analysis to show that the proposed DAS method can reduce the error bound and demonstrate its effectiveness with numerical experiments.
arXiv Detail & Related papers (2021-12-28T08:37:47Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolutional Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GATs).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)