Constructing Sub-scale Surrogate Model for Proppant Settling in Inclined
Fractures from Simulation Data with Multi-fidelity Neural Network
- URL: http://arxiv.org/abs/2109.12311v1
- Date: Sat, 25 Sep 2021 08:31:33 GMT
- Authors: Pengfei Tang, Junsheng Zeng, Dongxiao Zhang, and Heng Li
- Abstract summary: Particle settling in inclined channels is an important phenomenon that occurs during hydraulic fracturing of shale gas production.
In this work, a new method is proposed and utilized, i.e., the multi-fidelity neural network (MFNN), to construct a settling surrogate model.
The results demonstrate that constructing the settling surrogate with the MFNN can reduce the need for high-fidelity data and thus computational cost by 80%.
This opens novel pathways for rapidly predicting proppant settling velocity in reservoir applications.
- Score: 1.045294624175056
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Particle settling in inclined channels is an important phenomenon that occurs
during hydraulic fracturing in shale gas production. Generally, in order to
accurately simulate the large-scale (field-scale) proppant transport process,
constructing a fast and accurate sub-scale proppant settling model, or
surrogate model, becomes a critical issue, since mapping between physical
parameters and proppant settling velocity is complex. Previously, particle
settling has usually been investigated via high-fidelity experiments and
meso-scale numerical simulations, both of which are time-consuming. In this
work, a new method is proposed and utilized, i.e., the multi-fidelity neural
network (MFNN), to construct a settling surrogate model, which could utilize
both high-fidelity and low-fidelity (thus, less expensive) data. The results
demonstrate that constructing the settling surrogate with the MFNN can reduce
the need for high-fidelity data and thus computational cost by 80%, while the
accuracy loss is less than 5% compared to a high-fidelity surrogate. Moreover,
the particle settling surrogate is applied in macro-scale proppant
transport simulation, which shows that the settling model strongly influences
proppant transport and yields accurate results. This opens novel pathways for
rapidly predicting proppant settling velocity in reservoir applications.
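To make the multi-fidelity idea concrete, the following is a minimal sketch (not the authors' exact MFNN architecture): one network is fit to abundant low-fidelity data, and a second network learns the correlation between the input, the low-fidelity prediction, and the scarce high-fidelity output. The two settling-velocity functions below are hypothetical stand-ins for a cheap and an expensive simulator.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the low-fidelity model is cheap but biased,
# the high-fidelity model plays the role of a resolved meso-scale simulation.
def low_fidelity(x):
    return (0.8 * np.sin(8 * x) + 0.3).ravel()

def high_fidelity(x):
    return (np.sin(8 * x) + 0.1 * x).ravel()

x_lo = rng.uniform(0, 1, (200, 1))   # abundant low-fidelity samples
x_hi = rng.uniform(0, 1, (20, 1))    # scarce high-fidelity samples

# Step 1: fit a surrogate of the low-fidelity model.
net_lo = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net_lo.fit(x_lo, low_fidelity(x_lo))

# Step 2: learn the mapping (x, y_lo) -> y_hi from the few high-fidelity points.
feats_hi = np.hstack([x_hi, net_lo.predict(x_hi).reshape(-1, 1)])
net_corr = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net_corr.fit(feats_hi, high_fidelity(x_hi))

# Multi-fidelity prediction on unseen inputs.
x_new = np.linspace(0, 1, 50).reshape(-1, 1)
feats_new = np.hstack([x_new, net_lo.predict(x_new).reshape(-1, 1)])
y_mf = net_corr.predict(feats_new)
err = np.mean(np.abs(y_mf - high_fidelity(x_new)))
```

Because the low-to-high correlation here is simple, the second network needs far fewer samples than training directly on high-fidelity data, which is the intuition behind the reported reduction in high-fidelity data cost.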
Related papers
- A Microstructure-based Graph Neural Network for Accelerating Multiscale Simulations [0.0]
We introduce an alternative surrogate modeling strategy that allows for keeping the multiscale nature of the problem.
We achieve this by predicting full-field microscopic strains using a graph neural network (GNN) while retaining the microscopic material model.
We demonstrate for several challenging scenarios that the surrogate can predict complex macroscopic stress-strain paths.
arXiv Detail & Related papers (2024-02-20T15:54:24Z)
- Multi-fidelity Fourier Neural Operator for Fast Modeling of Large-Scale Geological Carbon Storage [0.0]
We propose to use a multi-fidelity Fourier neural operator (FNO) to solve large-scale carbon storage problems.
We first test the model efficacy on a GCS reservoir model being discretized into 110k grid cells.
The multi-fidelity model can predict with accuracy comparable to a high-fidelity model trained with the same amount of high-fidelity data, with 81% less data generation cost.
arXiv Detail & Related papers (2023-08-17T17:44:59Z)
- Multi-fidelity prediction of fluid flow and temperature field based on transfer learning using Fourier Neural Operator [10.104417481736833]
This work proposes a novel multi-fidelity learning method based on the Fourier Neural Operator.
It uses abundant low-fidelity data and limited high-fidelity data under transfer learning paradigm.
Three typical fluid and temperature prediction problems are chosen to validate the accuracy of the proposed multi-fidelity model.
arXiv Detail & Related papers (2023-04-14T07:46:03Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing the pressures by controlling injection/extraction is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate STNP outperforms the baselines in the learning setting and LIG achieves the state-of-the-art for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Hybrid Physics and Deep Learning Model for Interpretable Vehicle State Prediction [75.1213178617367]
We propose a hybrid approach combining deep learning and physical motion models.
We achieve interpretability by restricting the output range of the deep neural network as part of the hybrid model.
The results show that our hybrid model can improve model interpretability with no decrease in accuracy compared to existing deep learning approaches.
arXiv Detail & Related papers (2021-03-11T15:21:08Z)
- Machine learning for rapid discovery of laminar flow channel wall modifications that enhance heat transfer [56.34005280792013]
We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels and machine learning models predicting drag coefficient and Stanton number.
We show that convolutional neural networks (CNN) can accurately predict the target properties at a fraction of the time of numerical simulations.
arXiv Detail & Related papers (2021-01-19T16:14:02Z)
- Multi-fidelity Generative Deep Learning Turbulent Flows [0.0]
In computational fluid dynamics, there is an inevitable trade off between accuracy and computational cost.
In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields.
The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost magnitudes lower than that of a high-fidelity simulation.
arXiv Detail & Related papers (2020-06-08T16:37:48Z)
- Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: a case study with the Lorenz 96 model [0.0]
The method consists of iteratively applying a data assimilation step, here an ensemble Kalman filter, followed by a neural network update.
Data assimilation is used to optimally combine a surrogate model with sparse data.
The output analysis is spatially complete and is used as a training set by the neural network to update the surrogate model.
Numerical experiments have been carried out using the chaotic 40-variable Lorenz 96 model, proving both convergence and statistical skill of the proposed hybrid approach.
arXiv Detail & Related papers (2020-01-06T12:26:52Z)
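The scheme in the last entry alternates an ensemble Kalman filter (EnKF) analysis with neural-network training. As a minimal sketch of a single stochastic EnKF analysis step, with an illustrative linear observation operator and toy dimensions rather than the Lorenz 96 setup:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, N = 4, 2, 50            # state dim, observation dim, ensemble size
H = np.eye(m, n)              # observe the first two state components
R = 0.1 * np.eye(m)           # observation-error covariance

x_true = rng.normal(size=n)
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Forecast ensemble: a spread around a deliberately biased prior.
Xf = rng.normal(size=(n, N)) + 2.0

# Stochastic EnKF analysis step.
xm = Xf.mean(axis=1, keepdims=True)
A = Xf - xm
P = A @ A.T / (N - 1)                            # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
Yp = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed obs
Xa = Xf + K @ (Yp - H @ Xf)                      # analysis ensemble

# Mean error in the observed components, before and after analysis.
err_f = np.abs((Xf.mean(axis=1) - x_true)[:m]).mean()
err_a = np.abs((Xa.mean(axis=1) - x_true)[:m]).mean()
```

In the cited method, the spatially complete analysis ensemble (here `Xa`) would then serve as the training set for the neural-network surrogate, and the loop repeats.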
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.