Deep-learning-based upscaling method for geologic models via
theory-guided convolutional neural network
- URL: http://arxiv.org/abs/2201.00698v1
- Date: Fri, 31 Dec 2021 08:10:48 GMT
- Title: Deep-learning-based upscaling method for geologic models via
theory-guided convolutional neural network
- Authors: Nanzhe Wang, Qinzhuo Liao, Haibin Chang, Dongxiao Zhang
- Abstract summary: A deep convolutional neural network (CNN) is trained to approximate the relationship between the coarse grid of hydraulic conductivity fields and the hydraulic heads.
With the physical information considered, the volume of training data required for the deep CNN model can be greatly reduced.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large-scale or high-resolution geologic models usually comprise a huge number
of grid blocks, which can be computationally demanding and time-consuming to
solve with numerical simulators. Therefore, it is advantageous to upscale
geologic models (e.g., hydraulic conductivity) from fine-scale (high-resolution
grids) to coarse-scale systems. Numerical upscaling methods have been proven to
be effective and robust for coarsening geologic models, but their efficiency
remains to be improved. In this work, a deep-learning-based method is proposed
to upscale fine-scale geologic models, which can significantly improve
upscaling efficiency. In the deep learning method, a deep
convolutional neural network (CNN) is trained to approximate the relationship
between the coarse grid of hydraulic conductivity fields and the hydraulic
heads, which can then be utilized to replace the numerical solvers while
solving the flow equations for each coarse block. In addition, physical laws
(e.g., governing equations and periodic boundary conditions) can also be
incorporated into the training process of the deep CNN model, which is termed
the theory-guided convolutional neural network (TgCNN). With the physical
information considered, the volume of training data required for the deep
learning models can be greatly reduced. Several subsurface flow cases are
introduced to test the performance of the proposed deep-learning-based
upscaling method, including 2D and 3D cases, and isotropic and anisotropic
cases. The results show that the deep learning method can achieve upscaling
accuracy equivalent to that of the numerical method, with significantly
improved efficiency compared to numerical upscaling.
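The abstract describes a training loss that combines a data-mismatch term with physical constraints such as the governing flow equation. The sketch below is a minimal illustration of that idea for 2-D single-phase steady-state Darcy flow, div(K grad h) = 0, on a regular grid: a finite-difference PDE residual is added to the data loss. The stencil, function names, and weighting are illustrative assumptions, not the paper's implementation (which trains a CNN and also enforces periodic boundary conditions).

```python
import numpy as np

def darcy_residual(head, cond, dx=1.0):
    """Finite-difference residual of div(K grad h) on interior cells.

    head : 2-D array of hydraulic heads (one value per grid cell)
    cond : 2-D array of hydraulic conductivities, same shape as head
    """
    # Face conductivities by arithmetic averaging of neighboring cells
    Kx = 0.5 * (cond[:, 1:] + cond[:, :-1])
    Ky = 0.5 * (cond[1:, :] + cond[:-1, :])
    # Darcy fluxes across cell faces: q = -K grad h (sign drops out of the residual)
    qx = Kx * np.diff(head, axis=1) / dx
    qy = Ky * np.diff(head, axis=0) / dx
    # Flux divergence on interior cells; zero where the steady-state equation holds
    return (np.diff(qx, axis=1)[1:-1, :] + np.diff(qy, axis=0)[:, 1:-1]) / dx

def theory_guided_loss(head_pred, head_obs, cond, w_pde=1.0):
    """Composite loss: data mismatch plus weighted squared PDE residual."""
    data_loss = np.mean((head_pred - head_obs) ** 2)
    pde_loss = np.mean(darcy_residual(head_pred, cond) ** 2)
    return data_loss + w_pde * pde_loss
```

For a linear head field with uniform conductivity the PDE residual vanishes, so the loss reduces to the pure data mismatch; in TgCNN-style training, this composite loss is what lets physics substitute for part of the training data.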
Related papers
- Streamflow Prediction with Uncertainty Quantification for Water Management: A Constrained Reasoning and Learning Approach [27.984958596544278]
This paper studies a constrained reasoning and learning (CRL) approach where physical laws represented as logical constraints are integrated as a layer in the deep neural network.
To address the small-data setting, we develop a theoretically grounded training approach to improve the generalization accuracy of deep models.
arXiv Detail & Related papers (2024-05-31T18:53:53Z) - Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z) - Differentiable Turbulence II [0.0]
We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations.
We show that the learned closure can achieve accuracy comparable to traditional large eddy simulation on a finer grid, which amounts to an equivalent speedup of 10x.
arXiv Detail & Related papers (2023-07-25T14:27:49Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training of large models.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Multisymplectic Formulation of Deep Learning Using Mean--Field Type
Control and Nonlinear Stability of Training Algorithm [0.0]
We formulate training of deep neural networks as a hydrodynamics system with a multisymplectic structure.
For that, the deep neural network is modelled using a differential equation and, thereby, mean-field type control is used to train it.
The numerical scheme yields an approximate solution that is also an exact solution of a hydrodynamics system with a multisymplectic structure.
arXiv Detail & Related papers (2022-07-07T23:14:12Z) - Neural Galerkin Schemes with Active Learning for High-Dimensional
Evolution Equations [44.89798007370551]
This work proposes Neural Galerkin schemes based on deep learning that generate training data with active learning for numerically solving high-dimensional partial differential equations.
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time.
Our finding is that the active form of gathering training data of the proposed Neural Galerkin schemes is key for numerically realizing the expressive power of networks in high dimensions.
arXiv Detail & Related papers (2022-03-02T19:09:52Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - A Gradient-based Deep Neural Network Model for Simulating Multiphase
Flow in Porous Media [1.5791732557395552]
We describe a gradient-based deep neural network (GDNN) constrained by the physics related to multiphase flow in porous media.
We demonstrate that GDNN can effectively predict the nonlinear patterns of subsurface responses.
arXiv Detail & Related papers (2021-04-30T02:14:00Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs)
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.