Adaptive Self-supervision Algorithms for Physics-informed Neural
Networks
- URL: http://arxiv.org/abs/2207.04084v1
- Date: Fri, 8 Jul 2022 18:17:06 GMT
- Title: Adaptive Self-supervision Algorithms for Physics-informed Neural
Networks
- Authors: Shashank Subramanian, Robert M. Kirby, Michael W. Mahoney, Amir
Gholami
- Abstract summary: Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
- Score: 59.822151945132525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) incorporate physical knowledge from
the problem domain as a soft constraint on the loss function, but recent work
has shown that this can lead to optimization difficulties. Here, we study the
impact of the location of the collocation points on the trainability of these
models. We find that the vanilla PINN performance can be significantly boosted
by adapting the location of the collocation points as training proceeds.
Specifically, we propose a novel adaptive collocation scheme which
progressively allocates more collocation points (without increasing their
number) to areas where the model is making higher errors (based on the gradient
of the loss function in the domain). This, coupled with a judicious restarting
of the training during any optimization stalls (by simply resampling the
collocation points in order to adjust the loss landscape) leads to better
estimates for the prediction error. We present results for several problems,
including a 2D Poisson and diffusion-advection system with different forcing
functions. We find that training vanilla PINNs for these problems can result in
up to 70% prediction error in the solution, especially in the regime of low
collocation points. In contrast, our adaptive schemes can achieve up to an
order of magnitude smaller error, with similar computational complexity as the
baseline. Furthermore, we find that the adaptive methods consistently perform
on par with, or slightly better than, the vanilla PINN method, even for large collocation
point regimes. The code for all the experiments has been open sourced.
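Since the adaptive scheme is only described at a high level above, here is a minimal sketch (not the authors' released code) of residual-guided collocation resampling with a stall-triggered restart. The helper names `pinn`, `pde_residual`, and `domain_sampler` are assumptions, boundary/data loss terms are omitted for brevity, and the pointwise residual magnitude is used as a stand-in for the loss-gradient criterion described in the abstract.

```python
# Illustrative sketch only; assumes `pde_residual(pinn, points)` returns the
# pointwise PDE residual (computing spatial derivatives via autograd internally)
# and `domain_sampler(n)` draws n points uniformly from the domain.
import torch

def resample_collocation(pinn, pde_residual, points, domain_sampler, keep_frac=0.5):
    """Keep the collocation points with the largest residuals and redraw the rest,
    so the total number of points stays fixed."""
    res = pde_residual(pinn, points).detach().abs().squeeze()   # pointwise error proxy
    n_keep = int(keep_frac * points.shape[0])
    keep_idx = torch.topk(res, n_keep).indices                  # high-error points to keep
    fresh = domain_sampler(points.shape[0] - n_keep)            # redraw the remainder
    return torch.cat([points[keep_idx], fresh], dim=0)

def train(pinn, pde_residual, domain_sampler, n_points=1000, n_steps=10_000,
          resample_every=500, stall_patience=3, lr=1e-3):
    opt = torch.optim.Adam(pinn.parameters(), lr=lr)
    points = domain_sampler(n_points)
    best, stalls = float("inf"), 0
    for step in range(n_steps):
        opt.zero_grad()
        loss = pde_residual(pinn, points).pow(2).mean()          # soft PDE constraint
        loss.backward()
        opt.step()
        if step > 0 and step % resample_every == 0:
            # Move points toward high-error regions without changing their number.
            points = resample_collocation(pinn, pde_residual, points, domain_sampler)
            # Restart heuristic: if the loss has stopped improving, fully resample
            # the collocation set to reshape the loss landscape.
            stalls = stalls + 1 if loss.item() >= best else 0
            best = min(best, loss.item())
            if stalls >= stall_patience:
                points, stalls = domain_sampler(n_points), 0
    return pinn
```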
Related papers
- Deep Loss Convexification for Learning Iterative Models [11.36644967267829]
Iterative methods such as the iterative closest point (ICP) algorithm for point cloud registration often suffer from poor local optima.
We propose learning to shape a convex loss landscape around each ground truth.
arXiv Detail & Related papers (2024-11-16T01:13:04Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including the optimizer and regularizers, which limits this flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- SGD method for entropy error function with smoothing l0 regularization for neural networks [3.108634881604788]
The entropy error function has been widely used in neural networks.
We propose a novel entropy function with smoothing l0 regularization for feed-forward neural networks.
Our work is novel as it enables neural networks to learn effectively, producing more accurate predictions.
arXiv Detail & Related papers (2024-05-28T19:54:26Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
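A minimal sketch of one way a train-time calibration term like the one in the entry above can be instantiated: penalize the gap between each predicted box's class confidence and its IoU with the best-matching ground-truth box. This is an illustration under assumed shapes and matching, not the paper's exact formulation.

```python
import torch
from torchvision.ops import box_iou

def confidence_calibration_loss(pred_conf, pred_boxes, gt_boxes):
    """Auxiliary train-time term: push each predicted box's confidence toward its
    localization quality (IoU with the best-matching ground truth).
    Illustrative only; the published loss may differ in form and weighting.
    Shapes assumed: pred_conf [N], pred_boxes [N, 4], gt_boxes [M, 4]."""
    iou = box_iou(pred_boxes, gt_boxes).max(dim=1).values   # best IoU per predicted box
    return (pred_conf - iou).abs().mean()                   # L1 gap between confidence and IoU

# Used alongside the usual detection losses, e.g.:
# total_loss = cls_loss + box_loss + lambda_cal * confidence_calibration_loss(conf, boxes, gts)
```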
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective at solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
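For readers unfamiliar with the implicit SGD update in the entry above: it replaces the explicit step theta_new = theta_old - lr * grad(theta_old) with the implicit equation theta_new = theta_old - lr * grad(theta_new), i.e. a proximal step. The sketch below approximates this with a few fixed-point iterations; it is not the paper's implementation, and the inner-iteration count and usage are assumptions.

```python
import torch

def implicit_sgd_step(params, loss_fn, lr=1e-2, inner_iters=5):
    """One implicit (proximal) SGD step:
        theta_new = argmin_theta  loss(theta) + ||theta - theta_old||^2 / (2 * lr),
    i.e. theta_new = theta_old - lr * grad(loss)(theta_new).
    Approximated by fixed-point iteration; illustrative sketch only."""
    old = [p.detach().clone() for p in params]
    for _ in range(inner_iters):
        loss = loss_fn()                          # recompute loss at the current iterate
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, p_old, g in zip(params, old, grads):
                # Evaluate the gradient at the current iterate,
                # but always step from the *old* parameters.
                p.copy_(p_old - lr * g)

# Usage sketch (names assumed):
# params = list(pinn.parameters())
# implicit_sgd_step(params, lambda: pde_residual(pinn, points).pow(2).mean())
```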
- Fixed-budget online adaptive mesh learning for physics-informed neural networks. Towards parameterized problem inference [0.0]
We propose a Fixed-Budget Online Adaptive Mesh Learning (FBOAML) method, which selects collocation points based on the local maxima and local minima of the PDE residuals.
FBOAML is able to identify high-gradient regions and even gives better predictions for some physical fields than classical PINNs.
arXiv Detail & Related papers (2022-12-22T15:12:29Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
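The forward-gradient idea in the entry above can be sketched concretely: sample a random tangent on an activation, obtain the directional derivative of the loss with a Jacobian-vector product, and combine it with the exact local dependence of the activation on its weights, so no backpropagation through that layer is needed. The snippet below is an illustrative single-layer sketch assuming PyTorch >= 2.0 (`torch.func.jvp`) and a squared-error loss; the paper's local-loss machinery is omitted.

```python
import torch

def activity_perturbed_forward_grad(W1, W2, x, y):
    """Estimate dL/dW1 without backprop through layer 1.
    Shapes assumed: W1 [h, d], W2 [c, h], x [n, d], y [n, c]. Illustrative only."""
    z1 = (x @ W1.t()).detach()                    # layer-1 pre-activation (value only)

    def rest_of_network(z):                       # loss as a function of z1 alone
        h = torch.relu(z)
        pred = h @ W2.t()
        return ((pred - y) ** 2).mean()

    u = torch.randn_like(z1)                      # random tangent on the activation
    _, dir_deriv = torch.func.jvp(rest_of_network, (z1,), (u,))  # scalar <dL/dz1, u>
    g_z1 = dir_deriv * u                          # unbiased estimate of dL/dz1
    return g_z1.t() @ x                           # exact local chain rule, since z1 = x W1^T
```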
- Failure-informed adaptive sampling for PINNs [5.723850818203907]
Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains.
Recent research has demonstrated, however, that the performance of PINNs can vary dramatically with different sampling procedures.
We present an adaptive approach termed failure-informed PINNs, which is inspired by the viewpoint of reliability analysis.
arXiv Detail & Related papers (2022-10-01T13:34:41Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression [40.35734017517066]
Nested networks or slimmable networks are neural networks whose architectures can be adjusted instantly during testing time.
Recent studies have focused on a "nested dropout" layer, which is able to order the nodes of a layer by importance during training.
arXiv Detail & Related papers (2021-01-27T12:34:58Z)
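For context on the "nested dropout" layer mentioned in the entry above: it samples a truncation index and zeroes out all units beyond it, so earlier units are exercised more often and end up carrying more importance. The sketch below is a plain, non-Bayesian nested dropout module with a geometric truncation distribution assumed for illustration; it is not the paper's Bayesian formulation.

```python
import torch

class NestedDropout(torch.nn.Module):
    """Plain nested dropout: sample a truncation index k and zero out all units
    after k, which orders units by importance during training.
    Illustrative sketch; the truncation distribution is an assumption."""
    def __init__(self, num_units, p=0.1):
        super().__init__()
        self.num_units = num_units
        self.geom = torch.distributions.Geometric(probs=p)

    def forward(self, x):                        # x: [batch, num_units]
        if not self.training:
            return x                             # keep all units at test time
        k = int(self.geom.sample().item()) + 1   # truncation index in 1..num_units
        k = min(k, self.num_units)
        mask = torch.zeros(self.num_units, device=x.device, dtype=x.dtype)
        mask[:k] = 1.0                           # keep units [0, k), drop the rest
        return x * mask
```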
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.