Wasserstein Generative Adversarial Uncertainty Quantification in
Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2108.13054v1
- Date: Mon, 30 Aug 2021 08:18:58 GMT
- Title: Wasserstein Generative Adversarial Uncertainty Quantification in
Physics-Informed Neural Networks
- Authors: Yihang Gao and Michael K. Ng
- Abstract summary: Wasserstein Generative Adversarial Networks (WGANs) are designed to learn the uncertainty in solutions of partial differential equations.
We show that our physics-informed WGANs place a higher requirement on the capacity of discriminators than on that of generators.
- Score: 19.15477953428763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study a physics-informed algorithm for Wasserstein
Generative Adversarial Networks (WGANs) for uncertainty quantification in
solutions of partial differential equations. By using GroupSort activation
functions in adversarial network discriminators, network generators are
utilized to learn the uncertainty in solutions of partial differential
equations observed from the initial/boundary data. Under mild assumptions, we
show that the generalization error of the computed generator converges to the
approximation error of the network with high probability, when a sufficiently
large number of samples is taken. According to our established error bound, we
also find that our physics-informed WGANs place a higher requirement on the
capacity of discriminators than on that of generators. Numerical results on
synthetic examples of partial differential equations are reported to validate
our theoretical results and demonstrate how uncertainty quantification can be
obtained for solutions of partial differential equations and the distributions
of initial/boundary data.
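To make the setup concrete, here is a minimal PyTorch sketch of a physics-informed WGAN in the spirit of the abstract; it is an illustrative stand-in, not the authors' implementation. The toy Poisson problem, the network widths, the noise model for the boundary observations, and the weight clipping used to control the Lipschitz constant are all assumptions of the sketch; only the overall ingredients (a generator fed with latent noise, a GroupSort discriminator on initial/boundary data, and a PDE residual penalty) come from the abstract.

```python
import torch
import torch.nn as nn

class GroupSort(nn.Module):
    """GroupSort activation: sort entries within groups of `group_size`."""
    def __init__(self, group_size=2):
        super().__init__()
        self.group_size = group_size

    def forward(self, x):
        b, d = x.shape
        x = x.view(b, d // self.group_size, self.group_size)
        return torch.sort(x, dim=-1).values.view(b, d)

# Generator: (x, z) -> u(x; z); the latent code z carries the uncertainty.
G = nn.Sequential(nn.Linear(1 + 4, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 1))
# Discriminator on pairs of boundary values, with GroupSort activations.
D = nn.Sequential(nn.Linear(2, 64), GroupSort(2),
                  nn.Linear(64, 64), GroupSort(2),
                  nn.Linear(64, 1))

def boundary_values(z):
    """Generated solution values at the boundary points x = 0 and x = 1."""
    x01 = torch.tensor([[0.0], [1.0]]).repeat(z.shape[0], 1)
    zz = z.repeat_interleave(2, dim=0)
    return G(torch.cat([x01, zz], dim=1)).view(-1, 2)

def pde_residual(z, n_col=32):
    """Residual of the toy problem u''(x) = -pi^2 sin(pi x) at collocation points."""
    x = torch.rand(n_col * z.shape[0], 1, requires_grad=True)
    zz = z.repeat_interleave(n_col, dim=0)
    u = G(torch.cat([x, zz], dim=1))
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.pi**2 * torch.sin(torch.pi * x)

opt_g = torch.optim.RMSprop(G.parameters(), lr=1e-4)
opt_d = torch.optim.RMSprop(D.parameters(), lr=1e-4)
for step in range(1000):
    real = 0.1 * torch.randn(64, 2)        # noisy boundary observations (assumed)
    z = torch.randn(64, 4)
    # Critic step: maximize E[D(real)] - E[D(fake)] by minimizing its negation.
    loss_d = D(boundary_values(z).detach()).mean() - D(real).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in D.parameters():               # crude Lipschitz control by clipping
        p.data.clamp_(-0.05, 0.05)
    # Generator step: adversarial term plus the physics-informed residual penalty.
    loss_g = -D(boundary_values(z)).mean() + pde_residual(z).pow(2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The clipping is only a simple stand-in for keeping the critic Lipschitz; the GroupSort activation is the ingredient the paper highlights for obtaining expressive Lipschitz discriminators.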
Related papers
- Physics-Informed Generator-Encoder Adversarial Networks with Latent
Space Matching for Stochastic Differential Equations [14.999611448900822]
We propose a new class of physics-informed neural networks to address the challenges posed by forward, inverse, and mixed problems in differential equations.
Our model consists of two key components: the generator and the encoder, both updated alternately by gradient descent.
In contrast to previous approaches, we employ an indirect matching that operates within the lower-dimensional latent feature space.
arXiv Detail & Related papers (2023-11-03T04:29:49Z)
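A rough sketch of the indirect latent-space matching described above, under assumed interfaces: the encoder and generator sizes, the moment-matching loss, and the joint update below are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 32
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, data_dim))
encoder = nn.Sequential(nn.Linear(data_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))

def latent_matching_loss(data):
    """Score the mismatch in the low-dimensional latent space, not in data space."""
    z_data = encoder(data)                     # encoded observations
    z_prior = torch.randn_like(z_data)         # draws from the latent prior
    mean_gap = (z_data.mean(0) - z_prior.mean(0)).pow(2).sum()
    cov_gap = (z_data.T.cov() - z_prior.T.cov()).pow(2).sum()
    recon = (generator(z_data) - data).pow(2).mean()   # ties generator to encoder
    return mean_gap + cov_gap + recon

data = torch.randn(256, data_dim)              # placeholder observations
opt = torch.optim.Adam(list(generator.parameters()) + list(encoder.parameters()), lr=1e-3)
for _ in range(200):   # the paper alternates generator/encoder steps; joint here for brevity
    loss = latent_matching_loss(data)
    opt.zero_grad(); loss.backward(); opt.step()
```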
- Structured Radial Basis Function Network: Modelling Diversity for
Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or data drawn from a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
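The multiple-hypotheses idea can be illustrated with a toy stand-in (not the paper's structured model): several predictor heads share one radial-basis feature map and are fit with a winner-takes-all loss, so each head specializes on one branch of a bimodal target.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 512)
y = np.sin(3.0 * x) + rng.choice([-0.5, 0.5], size=512)    # bimodal targets

centers = np.linspace(-1.0, 1.0, 20)
Phi = np.exp(-25.0 * (x[:, None] - centers[None, :]) ** 2)  # Gaussian RBF features

W = rng.normal(scale=0.1, size=(2, 20))        # two hypothesis heads
for _ in range(2000):
    err = Phi @ W.T - y[:, None]               # per-head residuals, shape (n, 2)
    winner = np.abs(err).argmin(axis=1)        # best head for each sample
    for m in range(W.shape[0]):                # winner-takes-all gradient step
        mask = winner == m
        if mask.any():
            W[m] -= 0.5 * err[mask, m] @ Phi[mask] / mask.sum()
# Each head now tracks one branch, sin(3x) - 0.5 or sin(3x) + 0.5, instead of
# averaging the two modes as a single least-squares fit would.
```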
- PI-VEGAN: Physics Informed Variational Embedding Generative Adversarial
Networks for Stochastic Differential Equations [14.044012646069552]
We present a new category of physics-informed neural networks called the physics-informed variational embedding generative adversarial network (PI-VEGAN).
PI-VEGAN effectively tackles forward, inverse, and mixed problems of stochastic differential equations.
We evaluate its effectiveness on problems that require the concurrent calculation of system parameters and solutions.
arXiv Detail & Related papers (2023-07-21T01:18:02Z)
- Energy-Dissipative Evolutionary Deep Operator Neural Networks [12.764072441220172]
The Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator-learning network designed to seed numerical solutions for a class of partial differential equations.
arXiv Detail & Related papers (2023-06-09T22:11:16Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
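For reference, a minimal PINN for a coupled ODE system in the spirit of these benchmarks; the 2-D oscillator, network size, and training schedule are assumptions, and the paper's benchmark systems and curvature diagnostics are more elaborate.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))                      # t -> (x(t), y(t))

def pinn_loss(n_col=128):
    t = (2 * torch.pi * torch.rand(n_col, 1)).requires_grad_(True)
    xy = net(t)
    x, y = xy[:, :1], xy[:, 1:]
    dx = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    dy = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    residual = (dx - y).pow(2).mean() + (dy + x).pow(2).mean()  # x' = y, y' = -x
    xy0 = net(torch.zeros(1, 1))                           # x(0) = 1, y(0) = 0
    initial = (xy0[0, 0] - 1.0).pow(2) + xy0[0, 1].pow(2)
    return residual + initial

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):
    loss = pinn_loss()
    opt.zero_grad(); loss.backward(); opt.step()
# The exact solution is x(t) = cos(t), y(t) = -sin(t); as coupling or stiffness
# grows in such systems, plain PINN training of this kind starts to fail.
```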
- Learning Distributions by Generative Adversarial Networks: Approximation
and Generalization [0.6768558752130311]
We study how well generative adversarial networks learn from finite samples by analyzing the convergence rates of these models.
Our analysis is based on a new oracle inequality that decomposes the estimation error of GANs into the discriminator and generator approximation errors.
For the generator approximation error, we show that a neural network can approximately transform a low-dimensional source distribution into a high-dimensional target distribution.
arXiv Detail & Related papers (2022-05-25T09:26:17Z)
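The generator-approximation claim can be made concrete by hand (illustrative only): a one-dimensional source is pushed forward onto a distribution supported on a circle in R^2, exactly the kind of low-to-high-dimensional transport map a generator network is shown to approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, 10_000)              # one-dimensional source

# Push-forward map g: R -> R^2 whose image is the unit circle; a generator
# of sufficient width and depth can approximate such transport maps.
samples = np.stack([np.cos(2 * np.pi * z), np.sin(2 * np.pi * z)], axis=1)

radii = np.linalg.norm(samples, axis=1)
print(radii.min(), radii.max())                # both ~1.0: mass lies on the circle
```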
- Robust Estimation for Nonparametric Families via Generative Adversarial
Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these techniques to robust mean estimation, second-moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
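The smoothed Kolmogorov-Smirnov view can be demonstrated numerically; the sigmoid test functions and the contamination model below are assumptions for illustration, not the paper's exact loss. The classical KS distance takes a supremum over indicator test functions 1{x <= t}; replacing the indicators with sigmoids gives a smooth surrogate, and both remain small under gross outliers that badly shift the mean.

```python
import numpy as np

def ks_distance(p, q):
    """Classical KS distance: sup over thresholds t of |F_p(t) - F_q(t)|."""
    grid = np.sort(np.concatenate([p, q]))
    fp = np.searchsorted(np.sort(p), grid, side="right") / len(p)
    fq = np.searchsorted(np.sort(q), grid, side="right") / len(q)
    return np.abs(fp - fq).max()

def smoothed_ks(p, q, scale=0.3):
    """Smooth surrogate: the indicator 1{x <= t} replaced by a sigmoid."""
    grid = np.linspace(min(p.min(), q.min()), max(p.max(), q.max()), 200)
    sig = lambda u: 1.0 / (1.0 + np.exp(-u))
    fp = sig((grid[:, None] - p[None, :]) / scale).mean(axis=1)
    fq = sig((grid[:, None] - q[None, :]) / scale).mean(axis=1)
    return np.abs(fp - fq).max()

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 1000)
dirty = np.concatenate([rng.normal(0.0, 1.0, 950),
                        rng.normal(8.0, 1.0, 50)])   # 5% gross outliers
print(dirty.mean() - clean.mean())        # mean shifts by roughly 0.4
print(ks_distance(clean, dirty))          # stays near the 5% contamination level
print(smoothed_ks(clean, dirty))          # smooth surrogate behaves the same way
```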
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical
Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm is investigated for mixtures of elliptical distributions that can handle potentially missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
- Approximating Probability Distributions by using Wasserstein Generative
Adversarial Networks [16.005358327268194]
Wasserstein generative adversarial networks (WGANs) with GroupSort neural networks as their discriminators are studied.
It is shown that the error bound of the approximation for the target distribution depends on the width and depth (capacity) of the generators and discriminators.
arXiv Detail & Related papers (2021-03-18T07:40:13Z)
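A tiny concrete instance of a GroupSort discriminator (illustrative): with one GroupSort unit of group size 2 (MaxMin), a network with unit-norm rows represents the 1-Lipschitz function |x| = max(x, -x) exactly, the kind of exact representation that makes GroupSort networks expressive Wasserstein critics.

```python
import numpy as np

def groupsort(v, group_size=2):
    """Sort entries within consecutive groups (MaxMin when group_size=2)."""
    return np.sort(v.reshape(-1, group_size), axis=1).reshape(v.shape)

# 1-Lipschitz GroupSort network computing |x| exactly: the first layer sends
# x to (x, -x), MaxMin sorts the pair to (min, max), the output takes the max.
W1 = np.array([[1.0], [-1.0]])   # rows of unit norm
w2 = np.array([0.0, 1.0])        # selects max(x, -x) = |x|

def f(x):
    return w2 @ groupsort(W1 @ np.atleast_1d(x))

for x in (-2.0, -0.5, 0.0, 1.5):
    assert np.isclose(f(x), abs(x))
```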
- General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same separability.
arXiv Detail & Related papers (2020-10-11T13:12:41Z)
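The separability phenomenon is easy to observe numerically (a minimal sketch; the dimensions and threshold are illustrative): in high dimension, one Gaussian sample is almost surely separated from the rest of an i.i.d. cloud by the simple hyperplane <x, x0> = (1 - eps)||x0||^2, the mechanism behind both one-shot error correction and the corresponding attack surface.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 2000, 0.1
X = rng.normal(size=(n, d))                   # i.i.d. high-dimensional cloud
x0 = X[0]                                     # the "error" point to separate

threshold = (1.0 - eps) * x0 @ x0             # hyperplane <x, x0> = (1-eps)||x0||^2
violations = (X[1:] @ x0 >= threshold).sum()  # points falling on x0's side
print(violations)  # 0 with overwhelming probability once d >> log(n)
```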
- Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)