Residual-Quantile Adjustment for Adaptive Training of Physics-informed
Neural Network
- URL: http://arxiv.org/abs/2209.05315v1
- Date: Fri, 9 Sep 2022 12:39:38 GMT
- Title: Residual-Quantile Adjustment for Adaptive Training of Physics-informed
Neural Network
- Authors: Jiayue Han, Zhiqiang Cai, Zhiyou Wu, Xiang Zhou
- Abstract summary: In this paper, we show that the bottleneck in the adaptive choice of samples for training efficiency is the behavior of the tail distribution of the numerical residual.
We propose the Residual-Quantile Adjustment (RQA) method for a better weight choice for each training sample.
Experiment results show that the proposed method can outperform several adaptive methods on various partial differential equation (PDE) problems.
- Score: 2.5769426017309915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adaptive training methods for physics-informed neural networks (PINN) require
dedicated constructions of the distribution of weights assigned to each
training sample. Efficiently finding such an optimal weight distribution is not
a simple task, and most existing methods choose the adaptive weights by
approximating the full distribution or the maximum of the residuals. In this paper,
we show that the bottleneck for training efficiency in the adaptive choice of
samples is the behavior of the tail of the numerical residual distribution.
We therefore propose the Residual-Quantile Adjustment (RQA) method for a better
weight choice for each training sample. After initially setting the weights
proportional to the $p$-th power of the residual, our RQA method reassigns all
weights above the $q$-quantile ($90\%$ for example) to the median value, so that
the weights follow a quantile-adjusted distribution derived from the residuals.
With the iterative reweighting technique, RQA is also very easy to implement.
Experiment results show that the proposed method can outperform several
adaptive methods on various partial differential equation (PDE) problems.
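The abstract describes the RQA weighting rule concretely: weights start proportional to the $p$-th power of the residual, and any weight above the $q$-quantile is reassigned to the median. Below is a minimal NumPy sketch of that rule, assuming the per-sample PDE residuals are already computed; the function name rqa_weights, the defaults p=2 and q=0.9, and the training-loop comments are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def rqa_weights(residuals, p=2, q=0.9):
    """Sketch of Residual-Quantile Adjustment (RQA) weighting.

    residuals : per-sample PDE residuals r_i (1-D array)
    p         : initial weights are proportional to |r_i|**p
    q         : quantile above which weights are reassigned to the median
    """
    w = np.abs(residuals) ** p                  # initial weights ~ |r|^p
    cutoff = np.quantile(w, q)                  # q-quantile of the weight values
    w = np.where(w > cutoff, np.median(w), w)   # tail weights -> median
    return w / w.sum()                          # normalize into a distribution

# Illustrative use with iterative reweighting: every few epochs, recompute
# the residuals at the collocation points and refresh the weights before
# forming the weighted residual loss, e.g.
#   weights = rqa_weights(residuals)
#   loss = np.sum(weights * residuals**2)
```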
Related papers
- Efficient Backpropagation with Variance-Controlled Adaptive Sampling [32.297478086982466]
Sampling-based algorithms, which eliminate "unimportant" computations during forward and/or back propagation (BP), offer potential solutions to accelerate neural network training.
We introduce a variance-controlled adaptive sampling (VCAS) method designed to accelerate BP.
VCAS can preserve the original training loss trajectory and validation accuracy with an up to 73.87% FLOPs reduction of BP and 49.58% FLOPs reduction of the whole training process.
arXiv Detail & Related papers (2024-02-27T05:40:36Z)
- Adversarial Adaptive Sampling: Unify PINN and Optimal Transport for the Approximation of PDEs [2.526490864645154]
We propose a new minmax formulation to optimize simultaneously the approximate solution, given by a neural network model, and the random samples in the training set.
The key idea is to use a deep generative model to adjust random samples in the training set such that the residual induced by the approximate PDE solution can maintain a smooth profile.
arXiv Detail & Related papers (2023-05-30T02:59:18Z)
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- DAS: A deep adaptive sampling method for solving partial differential equations [2.934397685379054]
We propose a deep adaptive sampling (DAS) method for solving partial differential equations (PDEs).
Deep neural networks are utilized to approximate the solutions of PDEs and deep generative models are employed to generate new collocation points that refine the training set.
We present a theoretical analysis to show that the proposed DAS method can reduce the error bound and demonstrate its effectiveness with numerical experiments.
arXiv Detail & Related papers (2021-12-28T08:37:47Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to compute the solutions of systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
- Robust Sampling in Deep Learning [62.997667081978825]
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization.
We address this problem by a new regularization method based on distributional robust optimization.
During training, samples are selected according to their accuracy so that the worst-performing samples contribute the most to the optimization.
arXiv Detail & Related papers (2020-06-04T09:46:52Z)