Surrogate and inverse modeling for two-phase flow in porous media via
theory-guided convolutional neural network
- URL: http://arxiv.org/abs/2110.10080v1
- Date: Tue, 12 Oct 2021 14:52:37 GMT
- Title: Surrogate and inverse modeling for two-phase flow in porous media via
theory-guided convolutional neural network
- Authors: Nanzhe Wang, Haibin Chang, Dongxiao Zhang
- Abstract summary: Theory-guided convolutional neural network (TgCNN) framework is extended to two-phase porous media flow problems.
The two principal variables of the considered problem, pressure and saturation, are approximated simultaneously with two CNNs.
TgCNN surrogates can achieve better accuracy than ordinary CNN surrogates in two-phase flow problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The theory-guided convolutional neural network (TgCNN) framework, which can
incorporate discretized governing equation residuals into the training of
convolutional neural networks (CNNs), is extended to two-phase porous media
flow problems in this work. The two principal variables of the considered
problem, pressure and saturation, are approximated simultaneously with two
separate CNNs. Because pressure and saturation are coupled in the
governing equations, the two networks are also mutually conditioned during
training by the discretized governing equations, which increases the
difficulty of model training. At the same time, the coupled and discretized
equations provide valuable information in the training process. With the
assistance of theory-guidance, the TgCNN surrogates can achieve better accuracy
than ordinary CNN surrogates in two-phase flow problems. Moreover, a piecewise
training strategy is proposed for the scenario with varying well controls, in
which the TgCNN surrogates are constructed for different segments on the time
dimension and stacked together to predict solutions for the whole time-span.
For scenarios with larger variance of the formation property field, the TgCNN
surrogates can also achieve satisfactory performance. The constructed TgCNN
surrogates are further used for inversion of permeability fields by combining
them with the iterative ensemble smoother (IES) algorithm; satisfactory
inversion accuracy is obtained with improved efficiency.
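As a rough illustration of the training objective described in the abstract (a minimal sketch, not the authors' code; the network modules, the residual operators standing in for the discretized two-phase flow equations, and the weights lam_p/lam_s are all hypothetical placeholders):

    import torch

    def tgcnn_loss(p_net, s_net, x, p_obs, s_obs,
                   residual_pressure, residual_saturation,
                   lam_p=1.0, lam_s=1.0):
        """Data misfit plus discretized governing-equation residuals.

        residual_pressure / residual_saturation stand in for finite-difference
        discretizations of the coupled two-phase flow equations; each takes
        both predicted fields, since pressure and saturation are coupled.
        """
        p_pred = p_net(x)   # CNN approximating the pressure field
        s_pred = s_net(x)   # CNN approximating the saturation field

        # Supervised misfit against available training data.
        data_loss = (torch.mean((p_pred - p_obs) ** 2)
                     + torch.mean((s_pred - s_obs) ** 2))

        # Physics residuals couple the two networks: each residual depends on
        # both predicted fields, so gradients flow into both CNNs.
        r_p = residual_pressure(p_pred, s_pred)
        r_s = residual_saturation(p_pred, s_pred)
        pde_loss = lam_p * torch.mean(r_p ** 2) + lam_s * torch.mean(r_s ** 2)

        return data_loss + pde_loss

Minimizing this single scalar with one optimizer over the parameters of both networks is what makes the two CNNs mutually conditioned during training.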
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work is a step towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
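For intuition, an implicit (stochastic) gradient step evaluates the gradient at the new iterate rather than the current one, which is equivalent to a proximal update; below is a minimal sketch under that reading (the inner solver and step counts are illustrative assumptions, not the paper's algorithm):

    import torch

    def implicit_sgd_step(params, loss_fn, lr=1e-2, inner_steps=5, inner_lr=1e-2):
        # One implicit step: theta_new = argmin_w loss(w) + ||w - theta||^2 / (2*lr),
        # approximated here by a few explicit gradient steps on the proximal objective.
        anchor = [p.detach().clone() for p in params]
        for _ in range(inner_steps):
            prox = sum(((p - a) ** 2).sum() for p, a in zip(params, anchor)) / (2 * lr)
            obj = loss_fn() + prox
            grads = torch.autograd.grad(obj, params)
            with torch.no_grad():
                for p, g in zip(params, grads):
                    p -= inner_lr * g

The proximal term damps each update toward the previous iterate, which is the source of the improved stability.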
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - An Adaptive and Stability-Promoting Layerwise Training Approach for Sparse Deep Neural Network Architecture [0.0]
This work presents a two-stage adaptive framework for developing deep neural network (DNN) architectures that generalize well for a given training data set.
In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing parameters in the previous layers.
We introduce an epsilon-delta stability-promoting concept as a desirable property of a learning algorithm, and show that employing manifold regularization yields an epsilon-delta stability-promoting algorithm.
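The first-stage mechanics are easy to sketch (a hypothetical illustration of freeze-then-append layerwise training, not the paper's exact procedure):

    import torch.nn as nn

    def grow_network(model: nn.Sequential, new_layer: nn.Module) -> nn.Sequential:
        """One layerwise-training step: freeze what exists, append a trainable layer."""
        for p in model.parameters():
            p.requires_grad = False                   # previous layers stay fixed
        model.add_module(str(len(model)), new_layer)  # only this layer is trained next
        return model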
arXiv Detail & Related papers (2022-11-13T09:51:16Z) - Uncertainty quantification of two-phase flow in porous media via
coupled-TgNN surrogate model [6.705438773768439]
Uncertainty quantification (UQ) of subsurface two-phase flow usually requires numerous executions of forward simulations under varying conditions.
In this work, a novel coupled theory-guided neural network (TgNN) based surrogate model is built to facilitate efficiency under the premise of satisfactory accuracy.
arXiv Detail & Related papers (2022-05-28T02:33:46Z) - Performance and accuracy assessments of an incompressible fluid solver
coupled with a deep Convolutional Neural Network [0.0]
The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers.
A CNN has been introduced to solve this equation, leading to a significant reduction in inference time.
A hybrid strategy is developed, which couples a CNN with a traditional iterative solver to ensure a user-defined accuracy level.
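The hybrid strategy is straightforward to sketch: let the CNN supply a fast initial guess, then refine it with a classical iteration until a user-defined tolerance is met (a schematic with a Jacobi smoother; the CNN output and grid setup are assumed inputs, not the paper's solver):

    import numpy as np

    def hybrid_poisson_solve(rhs, cnn_guess, tol=1e-6, max_iters=10_000, h=1.0):
        """Refine a CNN-predicted solution of  -laplace(u) = rhs  with Jacobi
        iterations until the update drops below a user-defined tolerance."""
        u = cnn_guess.copy()                     # CNN output as initial guess
        for _ in range(max_iters):
            u_new = u.copy()
            u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                        + u[1:-1, :-2] + u[1:-1, 2:]
                                        + h * h * rhs[1:-1, 1:-1])
            if np.max(np.abs(u_new - u)) < tol:  # accuracy level reached
                return u_new
            u = u_new
        return u

The better the CNN guess, the fewer iterations are needed, which is where the speedup comes from while the iterative fallback keeps accuracy guaranteed.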
arXiv Detail & Related papers (2021-09-20T08:30:29Z) - Regularized Sequential Latent Variable Models with Adversarial Neural
Networks [33.74611654607262]
We present different ways of using high-level latent random variables in RNNs to model the variability in sequential data.
We also explore ways of using adversarial methods to train a variational RNN model.
arXiv Detail & Related papers (2021-08-10T08:05:14Z) - LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach for regularizing neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z) - A Convergence Theory Towards Practical Over-parameterized Deep Neural
Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose widths are quadratic in the sample size and linear in their depth, within a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
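One way to read that claim in symbols, with n the sample size, L the depth, m the width, and T the convergence time (our notational paraphrase of the sentence above, not the paper's exact statement or constants):

    m = \Omega\left(n^{2} L\right), \qquad T = O\left(\log(nL)\right)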
arXiv Detail & Related papers (2021-01-12T00:40:45Z) - A Lagrangian Dual-based Theory-guided Deep Neural Network [0.0]
The Lagrangian dual-based TgNN (TgNN-LD) is proposed to improve the effectiveness of TgNN.
Experimental results demonstrate the superiority of the Lagrangian dual-based TgNN.
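The general shape of a Lagrangian-dual treatment of theory-guided training is a min-max problem in which the physics-constraint weight becomes a dual variable (a generic schematic; the loss terms and the update scheme for lambda are placeholders rather than TgNN-LD's exact formulation):

    \min_{\theta}\ \max_{\lambda \ge 0}\ L_{\text{data}}(\theta) + \lambda\, L_{\text{physics}}(\theta)

In practice theta is updated by gradient descent and lambda by gradient ascent, so the constraint weight adapts during training instead of being hand-tuned.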
arXiv Detail & Related papers (2020-08-24T02:06:19Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while retaining its theoretical guarantees.
Experiments on several datasets demonstrate the effectiveness of our method and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Dynamic Hierarchical Mimicking Towards Consistent Optimization
Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.