Efficient Uncertainty Quantification for Dynamic Subsurface Flow with
Surrogate by Theory-guided Neural Network
- URL: http://arxiv.org/abs/2004.13560v1
- Date: Sat, 25 Apr 2020 12:41:57 GMT
- Title: Efficient Uncertainty Quantification for Dynamic Subsurface Flow with
Surrogate by Theory-guided Neural Network
- Authors: Nanzhe Wang, Haibin Chang, Dongxiao Zhang
- Abstract summary: We propose a methodology for efficient uncertainty quantification for dynamic subsurface flow with a surrogate constructed by the Theory-guided Neural Network (TgNN). In the TgNN, stochastic parameters, time and location comprise the input of the neural network, while the quantity of interest is the output.
The trained neural network can predict solutions of subsurface flow problems with new stochastic parameters.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subsurface flow problems usually involve some degree of uncertainty.
Consequently, uncertainty quantification is commonly necessary for subsurface
flow prediction. In this work, we propose a methodology for efficient
uncertainty quantification for dynamic subsurface flow with a surrogate
constructed by the Theory-guided Neural Network (TgNN). The TgNN here is
specially designed for problems with stochastic parameters. In the TgNN,
stochastic parameters, time and location comprise the input of the neural
network, while the quantity of interest is the output. The neural network is
trained with available simulation data, while being simultaneously guided by
theory (e.g., the governing equation, boundary conditions, initial conditions,
etc.) of the underlying problem. The trained neural network can predict
solutions of subsurface flow problems with new stochastic parameters. With the
TgNN surrogate, the Monte Carlo (MC) method can be efficiently implemented for
uncertainty quantification. The proposed methodology is evaluated with
two-dimensional dynamic saturated flow problems in porous media. Numerical
results show that the TgNN-based surrogate can significantly improve the
efficiency of uncertainty quantification tasks compared with simulation-based
implementation. Further investigations regarding stochastic fields with smaller
correlation length, larger variance, changing boundary values and
out-of-distribution variances are performed, and satisfactory results are
obtained.
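As a rough illustration of the method described in the abstract (our sketch, not the authors' code), the snippet below combines a data-misfit loss with the residual of an assumed toy 1-D saturated-flow equation u_t = (k(xi, x) u_x)_x; the architecture, the hypothetical conductivity field k, and all variable names are illustrative assumptions. Boundary- and initial-condition penalties would be added to the loss in the same way as the PDE residual, and Monte Carlo UQ with the trained surrogate reduces to cheap forward passes.

```python
# Illustrative TgNN-style loss (assumed toy PDE: u_t = (k(xi, x) * u_x)_x).
import torch
import torch.nn as nn

# u_theta(xi, t, x): stochastic parameter, time and location in; solution out.
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def k(xi, x):
    # Hypothetical conductivity field driven by the stochastic parameter xi.
    return torch.exp(xi * torch.sin(torch.pi * x))

def pde_residual(xi, t, x):
    """Residual u_t - (k * u_x)_x at collocation points (the theory guidance)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = net(torch.cat([xi, t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    flux_x = torch.autograd.grad((k(xi, x) * u_x).sum(), x, create_graph=True)[0]
    return u_t - flux_x

def tgnn_loss(data, colloc, w_pde=1.0):
    """Data misfit plus PDE residual; BC/IC penalties enter the same way."""
    xi_d, t_d, x_d, u_d = data
    mse_data = ((net(torch.cat([xi_d, t_d, x_d], dim=1)) - u_d) ** 2).mean()
    mse_pde = (pde_residual(*colloc) ** 2).mean()
    return mse_data + w_pde * mse_pde

# Monte Carlo UQ with the trained surrogate: sampling is just forward passes.
xi_mc = torch.randn(10_000, 1)                   # draws of the stochastic input
t0, x0 = torch.full_like(xi_mc, 0.5), torch.full_like(xi_mc, 0.3)
u_mc = net(torch.cat([xi_mc, t0, x0], dim=1))
u_mean, u_var = u_mc.mean(), u_mc.var()          # MC statistics of the QoI
```

The key point is that the collocation points need no simulation labels: the theory terms regularize the network wherever data are sparse, which is what makes the surrogate cheap to train relative to running the simulator for every MC realization.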
Related papers
- kNN Algorithm for Conditional Mean and Variance Estimation with
Automated Uncertainty Quantification and Variable Selection [8.429136647141487]
We introduce a kNN-based regression method that combines the scalability and adaptability of traditional non-parametric kNN models.
This method focuses on accurately estimating the conditional mean and variance of random response variables.
It is particularly notable in biomedical applications as demonstrated in two case studies.
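As a minimal sketch of the underlying idea (our illustration, not the paper's algorithm, which additionally automates uncertainty quantification and variable selection), the conditional mean and variance at a query point can be estimated from the responses of its k nearest neighbors:

```python
# Illustrative kNN conditional mean/variance estimate (not the paper's method).
import numpy as np

def knn_mean_var(X, y, x_query, k=25):
    """Estimate E[Y | X = x_query] and Var[Y | X = x_query] from k neighbors."""
    dists = np.linalg.norm(X - x_query, axis=1)   # distances to training points
    neighbors = y[np.argsort(dists)[:k]]          # responses of the k nearest
    return neighbors.mean(), neighbors.var(ddof=1)
```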
arXiv Detail & Related papers (2024-02-02T18:54:18Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
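A minimal sketch of one way to realize an implicit update (our illustration; the paper's exact scheme may differ): the implicit step theta_new = theta - lr * grad L(theta_new) is solved by fixed-point iteration, which converges for sufficiently small step sizes and makes the update far less sensitive to the step size than the explicit rule.

```python
# Illustrative implicit (stochastic) gradient step (not the paper's exact scheme).
import torch

def isgd_step(loss_fn, theta, lr=1e-2, n_inner=10):
    """Solve theta_new = theta - lr * grad L(theta_new) by fixed-point iteration."""
    theta_new = theta.detach().clone().requires_grad_(True)
    for _ in range(n_inner):                      # Picard iteration on the update
        g = torch.autograd.grad(loss_fn(theta_new), theta_new)[0]
        theta_new = (theta.detach() - lr * g).detach().requires_grad_(True)
    return theta_new.detach()
```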
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Bayesian deep learning framework for uncertainty quantification in high
dimensions [6.282068591820945]
We develop a novel deep learning method for uncertainty quantification in partial differential equations based on Bayesian neural networks (BNNs) and Hamiltonian Monte Carlo (HMC).
A BNN efficiently learns the posterior distribution of the network parameters via Bayesian inference.
The posterior distribution is efficiently sampled using HMC to quantify uncertainties in the system.
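As a toy illustration of the HMC ingredient (not the paper's implementation), a single transition over a flat parameter vector looks as follows; log_post stands in for the BNN's log posterior:

```python
# Toy HMC transition over a flat parameter vector (not the paper's code).
import torch

def hmc_step(log_post, theta, step=1e-2, n_leap=20):
    def grad_lp(th):
        th = th.detach().requires_grad_(True)
        return torch.autograd.grad(log_post(th), th)[0]

    p0 = torch.randn_like(theta)                  # resample auxiliary momentum
    th = theta.detach().clone()
    p = p0 + 0.5 * step * grad_lp(th)             # leapfrog: initial half step
    for i in range(n_leap):
        th = th + step * p                        # full position step
        if i < n_leap - 1:
            p = p + step * grad_lp(th)            # full momentum step
    p = p + 0.5 * step * grad_lp(th)              # final half step

    h0 = -log_post(theta) + 0.5 * (p0 ** 2).sum() # Hamiltonian before / after
    h1 = -log_post(th) + 0.5 * (p ** 2).sum()
    return th if torch.rand(()) < torch.exp(h0 - h1) else theta.detach()
```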
arXiv Detail & Related papers (2022-10-21T05:20:06Z)
- Uncertainty quantification of two-phase flow in porous media via
coupled-TgNN surrogate model [6.705438773768439]
Uncertainty quantification (UQ) of subsurface two-phase flow usually requires numerous executions of forward simulations under varying conditions.
In this work, a novel coupled theory-guided neural network (TgNN) surrogate model is built to improve efficiency while maintaining satisfactory accuracy.
arXiv Detail & Related papers (2022-05-28T02:33:46Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
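A minimal sketch of interval propagation for an implicit layer z = relu(W z + U x + b) (our illustration, not the paper's method; convergence of the iteration assumes the layer is well-posed, e.g. a contraction):

```python
# Illustrative interval bounds for an implicit layer z = relu(W z + U x + b).
import numpy as np

def affine_interval(W, lo, hi):
    """Tight elementwise interval image of W @ z for z in [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

def implicit_layer_bounds(W, U, b, x_lo, x_hi, n_iter=100):
    """Iterate the interval map; the limit encloses every equilibrium z."""
    u_lo, u_hi = affine_interval(U, x_lo, x_hi)
    z_lo = np.zeros(W.shape[0])
    z_hi = np.zeros(W.shape[0])
    for _ in range(n_iter):
        a_lo, a_hi = affine_interval(W, z_lo, z_hi)
        z_lo = np.maximum(a_lo + u_lo + b, 0.0)   # relu is monotone, so it maps
        z_hi = np.maximum(a_hi + u_hi + b, 0.0)   # interval endpoints to endpoints
    return z_lo, z_hi
```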
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robust Learning of Physics Informed Neural Networks [2.86989372262348]
Physics-informed Neural Networks (PINNs) have been shown to be effective in solving partial differential equations.
This paper shows that a PINN can be sensitive to errors in its training data and can overfit to them, dynamically propagating these errors over the solution domain of the PDE.
arXiv Detail & Related papers (2021-10-26T00:10:57Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task if the topological randomness is not properly accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
- Stable Neural Flows [15.318500611972441]
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
The learning procedure is cast as an optimal control problem, and an approximate solution is proposed based on adjoint sensitivity analysis.
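A minimal sketch of the stability mechanism (our illustration, with explicit Euler in place of a proper ODE solver and without the optimal-control training): if the vector field is the negative gradient of a learned energy, the energy is non-increasing along trajectories, since dE/dt = -|grad E|^2 <= 0.

```python
# Illustrative stable neural flow: follow the gradient flow of a learned energy.
import torch
import torch.nn as nn

energy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def vector_field(x):
    """dx/dt = -grad E(x); E is then non-increasing along trajectories."""
    x = x.detach().requires_grad_(True)
    return -torch.autograd.grad(energy(x).sum(), x)[0]

def rollout(x0, dt=0.01, steps=200):
    """Explicit Euler integration (a real neural ODE would use an ODE solver)."""
    x = x0
    for _ in range(steps):
        x = x + dt * vector_field(x)
    return x
```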
arXiv Detail & Related papers (2020-03-18T06:27:21Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs in which the mapping from base density to output space is conditioned on an input x, in order to model conditional densities p(y|x).
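A minimal one-layer sketch of a conditional affine flow (our illustration, not the paper's model): the scale and shift of the base-to-output map are produced by a conditioner network applied to x, and log p(y|x) follows from the change-of-variables formula.

```python
# Minimal one-layer conditional affine flow (illustrative, not the paper's model).
import math
import torch
import torch.nn as nn

class CondAffineFlow(nn.Module):
    """y = z * exp(s(x)) + t(x) with base z ~ N(0, I)."""
    def __init__(self, x_dim, y_dim, hidden=64):
        super().__init__()
        self.cond = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * y_dim))

    def log_prob(self, y, x):
        s, t = self.cond(x).chunk(2, dim=-1)      # input-conditioned scale/shift
        z = (y - t) * torch.exp(-s)               # invert the flow
        log_base = -0.5 * (z ** 2).sum(-1) - 0.5 * y.shape[-1] * math.log(2 * math.pi)
        return log_base + (-s).sum(-1)            # + log|det dz/dy|

    def sample(self, x):
        s, t = self.cond(x).chunk(2, dim=-1)
        return torch.randn_like(t) * torch.exp(s) + t
```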
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.