Failure-informed adaptive sampling for PINNs, Part II: combining with re-sampling and subset simulation
- URL: http://arxiv.org/abs/2302.01529v1
- Date: Fri, 3 Feb 2023 04:05:06 GMT
- Title: Failure-informed adaptive sampling for PINNs, Part II: combining with re-sampling and subset simulation
- Authors: Zhiwei Gao, Tao Tang, Liang Yan, Tao Zhou
- Abstract summary: In this work, we present two novel extensions to FI-PINNs.
The first extension consists of combining FI-PINNs with a re-sampling technique, so that the new algorithm can maintain a constant training size.
The second extension adopts the subset simulation algorithm as the posterior model for estimating the error indicator.
- Score: 6.9476677480730125
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This is the second part of our series of works on failure-informed adaptive
sampling for physics-informed neural networks (FI-PINNs). In our previous work
\cite{gao2022failure}, we have presented an adaptive sampling framework by
using the failure probability as the posterior error indicator, where the
truncated Gaussian model has been adopted for estimating the indicator. In this
work, we present two novel extensions to FI-PINNs. The first extension consists
of combining FI-PINNs with a re-sampling technique, so that the new algorithm can
maintain a constant training size. This is achieved through a cosine-annealing
schedule, which gradually shifts the sampling of collocation points from uniform
to adaptive as training progresses. The second extension adopts the subset
simulation algorithm as the posterior model (in place of the truncated Gaussian
model) for estimating the error indicator, which can estimate the failure
probability more effectively and generate new, effective training points in the
failure region. We investigate the performance of the new approach using
several challenging problems, and numerical experiments demonstrate a
significant improvement over the original algorithm.
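The following is a minimal sketch, not the authors' implementation, of the re-sampling idea described in the abstract: at each re-sampling step a fixed-size collocation set is rebuilt, with a cosine-annealing schedule controlling the fraction of points drawn uniformly versus adaptively (here, proportionally to the PDE residual magnitude). The 1D domain, the `residual_fn` interface, and the candidate-pool size are illustrative assumptions.
```python
import numpy as np

def cosine_anneal(t, t_max):
    """Fraction of adaptive samples: 0 at t = 0, 1 at t = t_max (cosine schedule)."""
    return 0.5 * (1.0 - np.cos(np.pi * min(t, t_max) / t_max))

def resample_collocation(residual_fn, n_points, t, t_max, domain=(0.0, 1.0), rng=None):
    """Draw a constant-size collocation set mixing uniform and residual-weighted points."""
    rng = np.random.default_rng() if rng is None else rng
    frac_adaptive = cosine_anneal(t, t_max)
    n_adaptive = int(frac_adaptive * n_points)
    n_uniform = n_points - n_adaptive

    # Uniform part of the new training set.
    uniform_pts = rng.uniform(domain[0], domain[1], size=n_uniform)

    # Adaptive part: draw candidates and keep them with probability proportional
    # to the residual magnitude (a simple stand-in for sampling from the
    # error-indicator / failure distribution).
    candidates = rng.uniform(domain[0], domain[1], size=10 * max(n_adaptive, 1))
    weights = np.abs(residual_fn(candidates))
    weights = weights / weights.sum() if weights.sum() > 0 else None
    adaptive_pts = rng.choice(candidates, size=n_adaptive, replace=False, p=weights)

    return np.concatenate([uniform_pts, adaptive_pts])

if __name__ == "__main__":
    # Toy residual that is large near x = 0.5.
    toy_residual = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)
    pts = resample_collocation(toy_residual, n_points=1000, t=50, t_max=100)
    print(pts.shape)  # (1000,): training size stays constant across re-sampling steps
```
Early in training (small `t`) nearly all points are uniform; late in training most points concentrate where the residual is large, while the total training size never grows.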
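For the second extension, here is an illustrative sketch of plain subset simulation used to estimate the failure probability P[g(x) > tol] (with g, e.g., the PDE residual magnitude) and to collect samples inside the failure region as candidate training points. The 1D uniform domain, the level probability `p0`, and the random-walk proposal scale are assumptions made for the example; the paper's posterior model is not reproduced here.
```python
import numpy as np

def subset_simulation(g, tol, domain=(0.0, 1.0), n=1000, p0=0.1, max_levels=10, rng=None):
    """Estimate P[g(x) > tol] and return the samples found in the failure region."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(domain[0], domain[1], size=n)
    gx = g(x)
    prob = 1.0

    for _ in range(max_levels):
        # Intermediate threshold: the (1 - p0) quantile of the current responses.
        b = np.quantile(gx, 1.0 - p0)
        if b >= tol:
            # Final level reached: count exceedances of the true threshold.
            return prob * np.mean(gx > tol), x[gx > tol]
        prob *= p0

        # Seeds for the next level: samples already above the intermediate threshold.
        seeds = x[gx > b]
        chains_per_seed = int(np.ceil(n / len(seeds)))
        new_x = []
        for s in seeds:
            cur = s
            for _ in range(chains_per_seed):
                # Random-walk proposal, accepted only if it stays in the domain
                # and inside the intermediate failure region {g > b}.
                cand = cur + rng.normal(0.0, 0.1 * (domain[1] - domain[0]))
                if domain[0] <= cand <= domain[1] and g(np.array([cand]))[0] > b:
                    cur = cand
                new_x.append(cur)
        x = np.array(new_x[:n])
        gx = g(x)

    return prob * np.mean(gx > tol), x[gx > tol]

if __name__ == "__main__":
    # Toy residual magnitude peaked near x = 0.5; exceedances of tol are rare.
    g = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)
    p_fail, failure_pts = subset_simulation(g, tol=0.999)
    print(p_fail, failure_pts[:5])
```
The samples returned from the final level all lie in the failure region, which is how, in spirit, such a scheme can supply new collocation points where the network error is largest.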
Related papers
- A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization [12.812942188697326]
Diffusion models have emerged as a powerful tool rivaling GANs in generating high-quality samples with improved fidelity, flexibility, and robustness.
A key component of these models is to learn the score function through score matching.
Despite empirical success on various tasks, it remains unclear whether gradient-based algorithms can learn the score function with a provable accuracy.
arXiv Detail & Related papers (2024-01-28T08:13:56Z)
- Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies [35.29595714883275]
We develop an efficient sketch-based approximation to the Nadaraya-Watson estimator.
Our sampling algorithm outperforms the baseline in terms of wall-clock time and accuracy on four datasets.
arXiv Detail & Related papers (2023-11-22T18:40:18Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first place.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Collapsed Inference for Bayesian Deep Learning [36.1725075097107]
We introduce a novel collapsed inference scheme that performs Bayesian model averaging using collapsed samples.
A collapsed sample represents uncountably many models drawn from the approximate posterior.
Our proposed use of collapsed samples achieves a balance between scalability and accuracy.
arXiv Detail & Related papers (2023-06-16T08:34:42Z)
- Solving Inverse Problems with Score-Based Generative Priors learned from Noisy Data [1.7969777786551424]
SURE-Score is an approach for learning score-based generative models using training samples corrupted by additive Gaussian noise.
We demonstrate the generality of SURE-Score by learning priors and applying posterior sampling to ill-posed inverse problems in two practical applications.
arXiv Detail & Related papers (2023-05-02T02:51:01Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)