Learning multi-phase flow and transport in fractured porous media with auto-regressive and recurrent graph neural networks
- URL: http://arxiv.org/abs/2502.17512v1
- Date: Sat, 22 Feb 2025 10:12:52 GMT
- Title: Learning multi-phase flow and transport in fractured porous media with auto-regressive and recurrent graph neural networks
- Authors: Mohammed Al Kobaisi, Wenjuan Zhang, Waleed Diab, Hadi Hajibeygi
- Abstract summary: We propose to learn the complex multi-phase flow and transport dynamics in fractured porous media with graph neural networks (GNNs). GNNs are well suited for this task due to the unstructured topology of the grid resulting from the Embedded Discrete Fracture Model (EDFM) discretization. We show that both GNNs generalize well to unseen fracture realizations, with comparable performance in forecasting saturation sequences, and slightly better performance for the recurrent GNN in predicting pressure sequences.
- Score: 0.3749861135832073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past three decades, a wide array of computational methodologies and simulation frameworks has emerged to address the complexities of modeling multi-phase flow and transport processes in fractured porous media. Conformal mesh approaches, which explicitly align the computational grid with fracture surfaces, are considered by many to be the most accurate. However, such methods require excessively fine-scale meshing, rendering them impractical for large or complex fracture networks. In this work, we propose to learn the complex multi-phase flow and transport dynamics in fractured porous media with graph neural networks (GNNs). GNNs are well suited for this task due to the unstructured topology of the computational grid resulting from the Embedded Discrete Fracture Model (EDFM) discretization. We propose two deep learning architectures, a GNN and a recurrent GNN. Both networks follow a two-stage training strategy: an autoregressive one-step rollout, followed by a fine-tuning step in which the model is supervised using the whole ground-truth sequence. We demonstrate that the two-stage training approach is effective in mitigating error accumulation during autoregressive model rollouts in the testing phase. Our findings indicate that both GNNs generalize well to unseen fracture realizations, with comparable performance in forecasting saturation sequences and slightly better performance for the recurrent GNN in predicting pressure sequences. While the second stage of training proved beneficial for the GNN model, its impact on the recurrent GNN model was less pronounced. Finally, the performance of both GNNs for temporal extrapolation is tested. The recurrent GNN significantly outperformed the GNN in terms of accuracy, underscoring its superior capability in predicting long sequences.
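The two-stage training strategy lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming a toy message-passing network and random stand-in data; the names (SimpleGNN, one_step_stage, rollout_stage) and data shapes are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SimpleGNN(nn.Module):
    """One round of mean-aggregation message passing over edge_index."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                      # (2, E) sender/receiver ids
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, self.msg(x[src]))   # sum messages per receiver
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        agg = agg / deg.unsqueeze(-1)              # mean aggregation
        return x + self.upd(torch.cat([x, agg], dim=-1))  # next-state residual

def one_step_stage(model, seq, edge_index, opt, loss_fn=nn.MSELoss()):
    """Stage 1: supervise single-step transitions x_t -> x_{t+1}."""
    for t in range(seq.size(0) - 1):
        opt.zero_grad()
        loss = loss_fn(model(seq[t], edge_index), seq[t + 1])
        loss.backward()
        opt.step()

def rollout_stage(model, seq, edge_index, opt, loss_fn=nn.MSELoss()):
    """Stage 2: roll out autoregressively from the initial state and
    supervise against the whole ground-truth sequence."""
    opt.zero_grad()
    x, loss = seq[0], 0.0
    for t in range(1, seq.size(0)):
        x = model(x, edge_index)                   # feed predictions back in
        loss = loss + loss_fn(x, seq[t])
    loss.backward()
    opt.step()

if __name__ == "__main__":
    n_nodes, dim, n_steps = 8, 4, 6
    edge_index = torch.randint(0, n_nodes, (2, 20))
    seq = torch.randn(n_steps, n_nodes, dim)       # toy state sequence
    model = SimpleGNN(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    one_step_stage(model, seq, edge_index, opt)    # stage 1
    rollout_stage(model, seq, edge_index, opt)     # stage 2 fine-tuning
```

Supervising the full rollout in stage 2 exposes the model to its own accumulated prediction errors, which is what mitigates drift at test time.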
Related papers
- A Self-organizing Interval Type-2 Fuzzy Neural Network for Multi-Step Time Series Prediction [9.546043411729206]
Interval type-2 fuzzy neural networks (IT2FNNs) have shown exceptional performance in uncertainty modelling for single-step prediction tasks.
This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO).
Experimental results on chaotic and microgrid prediction problems demonstrate that SOIT2FNN-MO outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-07-10T19:35:44Z)
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNNs and GNNs via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
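As an illustration of the distillation mechanism CNN2GNN builds on, here is a minimal PyTorch sketch of response-based knowledge distillation; the temperature and loss weighting are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with softened teacher/student KL."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                        # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a frozen CNN "teacher" supervises a small GNN "student".
student_logits = torch.randn(32, 100, requires_grad=True)
teacher_logits = torch.randn(32, 100)
labels = torch.randint(0, 100, (32,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```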
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets [3.0894823679470087]
This paper introduces the Multi-Stage Folding and Unshared Masks methods to expand the search space in terms of both architecture and parameters.
By achieving high sparsity, competitive performance, and high memory efficiency, with up to a 98.7% reduction, the approach demonstrates its suitability for energy-efficient graph processing.
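For context, here is a minimal sketch of the strong-lottery-ticket mechanism this paper extends: weights stay frozen at their random initialization and only a binary mask is learned, via scores and a straight-through estimator. The layer name and top-k rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SupermaskLinear(nn.Module):
    def __init__(self, in_dim, out_dim, sparsity=0.5):
        super().__init__()
        # Frozen random weights: never updated during training.
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim), requires_grad=False)
        self.scores = nn.Parameter(torch.randn(out_dim, in_dim))  # learned
        self.sparsity = sparsity

    def forward(self, x):
        k = int(self.scores.numel() * (1 - self.sparsity))   # weights to keep
        threshold = self.scores.flatten().kthvalue(self.scores.numel() - k + 1).values
        mask = (self.scores >= threshold).float()
        # Straight-through: hard mask in the forward pass, identity gradient
        # to the scores in the backward pass.
        mask = mask + self.scores - self.scores.detach()
        return nn.functional.linear(x, self.weight * mask)

layer = SupermaskLinear(16, 8, sparsity=0.9)      # 90% of weights pruned
out = layer(torch.randn(4, 16))
out.sum().backward()                              # gradients reach scores only
```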
arXiv Detail & Related papers (2023-12-06T02:16:44Z)
- Jointly-Learned Exit and Inference for a Dynamic Neural Network: JEI-DNN [20.380620709345898]
Early-exiting dynamic neural networks (EDNNs) allow a model to make some of its predictions from intermediate layers (i.e., to exit early).
Training an EDNN architecture is challenging as it consists of two intertwined components: the gating mechanism (GM) that controls early-exiting decisions and the intermediate inference modules (IMs) that perform inference from intermediate representations.
We propose a novel architecture that connects these two modules. This leads to significant performance improvements on classification datasets and enables better uncertainty characterization capabilities.
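To make the EDNN mechanics concrete, here is a minimal PyTorch sketch of early-exit inference: each intermediate inference module (IM) produces logits, and a gating mechanism (GM) decides whether to stop. JEI-DNN's contribution is learning the gates jointly with the IMs, so the fixed confidence threshold below is a simplifying assumption.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dim, n_classes, n_blocks=3, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_blocks))
        self.heads = nn.ModuleList(nn.Linear(dim, n_classes) for _ in range(n_blocks))
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        for i, (block, head) in enumerate(zip(self.blocks, self.heads)):
            x = torch.relu(block(x))
            logits = head(x)                      # intermediate inference module
            confidence = logits.softmax(dim=-1).max().item()
            if confidence >= self.threshold or i == len(self.blocks) - 1:
                return logits, i                  # gate: exit when confident

net = EarlyExitNet(dim=16, n_classes=10)
logits, exit_layer = net(torch.randn(1, 16))
print(f"exited at intermediate module {exit_layer}")
```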
arXiv Detail & Related papers (2023-10-13T14:56:38Z)
- Learning CO$_2$ plume migration in faulted reservoirs with Graph Neural Networks [0.3914676152740142]
We develop a graph-based neural model for capturing the impact of faults on CO$_2$ plume migration.
We demonstrate that our approach can accurately predict the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with faults.
This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures.
arXiv Detail & Related papers (2023-06-16T06:47:47Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
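To illustrate what an implicit update means, here is a minimal sketch of an implicit SGD step on a toy quadratic loss, solved by fixed-point iteration; the loss and iteration count are illustrative assumptions, not the paper's PINN setup.

```python
import torch

def loss_fn(theta):
    return 0.5 * (theta ** 2).sum()           # toy stand-in for a PINN loss

def implicit_sgd_step(theta, lr=0.5, n_fixed_point=10):
    """Explicit SGD uses theta - lr * grad(theta); the implicit step instead
    solves theta_new = theta - lr * grad(theta_new) by fixed-point iteration."""
    theta_new = theta.clone()
    for _ in range(n_fixed_point):
        theta_new = theta_new.detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(theta_new), theta_new)
        theta_new = theta - lr * grad         # fixed-point map
    return theta_new.detach()

theta = torch.tensor([2.0, -3.0])
for step in range(5):
    theta = implicit_sgd_step(theta)
print(theta)  # contracts toward the minimizer; implicit steps stay stable
```

For this quadratic the implicit step reduces to theta / (1 + lr), which is stable for any positive learning rate, whereas the explicit step diverges once lr exceeds 2; that damping is the stability benefit the paper exploits.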
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
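As a sketch of the underlying idea, keeping the backbone deterministic and making a single feature layer Bayesian, here is a mean-field variational linear layer trained with the reparameterization trick; the prior, KL weight, and architecture are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VariationalLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.log_sigma = nn.Parameter(torch.full((out_dim, in_dim), -3.0))

    def forward(self, x):
        sigma = self.log_sigma.exp()
        weight = self.mu + sigma * torch.randn_like(sigma)   # reparameterize
        return nn.functional.linear(x, weight)

    def kl(self):
        # KL(q || N(0, 1)) for a mean-field Gaussian posterior.
        sigma2 = (2 * self.log_sigma).exp()
        return 0.5 * (sigma2 + self.mu ** 2 - 1 - 2 * self.log_sigma).sum()

backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU())  # deterministic part
bayes_head = VariationalLinear(16, 2)                  # the only Bayesian layer
x, y = torch.randn(32, 8), torch.randn(32, 2)
pred = bayes_head(backbone(x))
loss = nn.functional.mse_loss(pred, y) + 1e-3 * bayes_head.kl()
loss.backward()
```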
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Surrogate and inverse modeling for two-phase flow in porous media via theory-guided convolutional neural network [0.0]
The theory-guided convolutional neural network (TgCNN) framework is extended to two-phase porous media flow problems.
The two principal variables of the considered problem, pressure and saturation, are approximated simultaneously with two CNNs.
TgCNN surrogates can achieve better accuracy than ordinary CNN surrogates in two-phase flow problems.
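The theory-guided training idea can be sketched as a composite loss: data mismatch plus a penalty for violating the governing equations, applied to two CNNs that predict pressure and saturation. The residual below is a generic placeholder, not the paper's discretized two-phase flow equations; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn

# Two CNNs approximate the two principal variables simultaneously.
pressure_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.Tanh(),
                             nn.Conv2d(8, 1, 3, padding=1))
saturation_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.Tanh(),
                               nn.Conv2d(8, 1, 3, padding=1))

perm = torch.randn(4, 1, 16, 16)           # toy permeability input fields
p_obs = torch.randn(4, 1, 16, 16)          # toy simulator labels
s_obs = torch.rand(4, 1, 16, 16)

p, s = pressure_net(perm), saturation_net(perm)

def physics_residual(p, s):
    # Placeholder "theory" term: penalize pressure roughness as a stand-in
    # for the discretized flow-equation residual, plus saturation-bound
    # violations outside [0, 1].
    dx = p[..., :, 1:] - p[..., :, :-1]
    dy = p[..., 1:, :] - p[..., :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean() + ((s - s.clamp(0, 1)) ** 2).mean()

data_loss = nn.functional.mse_loss(p, p_obs) + nn.functional.mse_loss(s, s_obs)
loss = data_loss + 0.1 * physics_residual(p, s)   # theory-guided penalty
loss.backward()
```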
arXiv Detail & Related papers (2021-10-12T14:52:37Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that injects some degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
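To make the bandit framing concrete, here is a minimal UCB-style neighbor sampler: each neighbor is an arm, and the rule trades off exploring neighbors against exploiting those whose sampled messages were most informative. The toy reward is an illustrative stand-in for the paper's bias-variance-tuned design.

```python
import math
import random

class NeighborBandit:
    def __init__(self, neighbors):
        self.neighbors = neighbors
        self.counts = {v: 0 for v in neighbors}
        self.values = {v: 0.0 for v in neighbors}
        self.t = 0

    def select(self):
        self.t += 1
        def ucb(v):
            if self.counts[v] == 0:
                return float("inf")            # try every arm once
            bonus = math.sqrt(2 * math.log(self.t) / self.counts[v])
            return self.values[v] + bonus      # exploit + explore
        return max(self.neighbors, key=ucb)

    def update(self, v, reward):
        self.counts[v] += 1
        self.values[v] += (reward - self.values[v]) / self.counts[v]  # running mean

bandit = NeighborBandit(neighbors=[101, 102, 103, 104])
for _ in range(100):
    v = bandit.select()
    # Toy reward: negative approximation error; arm 103 is low-variance.
    reward = -abs(random.gauss(0.0, 1.0 if v != 103 else 0.1))
    bandit.update(v, reward)
print(max(bandit.values, key=bandit.values.get))  # likely 103
```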
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In such situations, the GNN fails to address its distributed task if the topological randomness is not accounted for.
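A minimal sketch of the robustness mechanism this motivates: train with randomly dropped links so the learned aggregation tolerates link fluctuations at deployment time. The drop probability and mean aggregation are assumptions, not the paper's exact stochastic filter design.

```python
import torch
import torch.nn as nn

class RobustGraphLayer(nn.Module):
    def __init__(self, dim, p_drop=0.2):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)
        self.p_drop = p_drop

    def forward(self, x, adj):
        if self.training:                        # perturb topology in training
            keep = (torch.rand_like(adj) > self.p_drop).float()
            adj = adj * keep
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        agg = adj @ x / deg                      # mean over surviving neighbors
        return torch.relu(self.lin(torch.cat([x, agg], dim=-1)))

layer = RobustGraphLayer(dim=8)
x = torch.randn(10, 8)                           # 10 agents, 8 features each
adj = (torch.rand(10, 10) > 0.5).float()         # nominal communication links
layer.train(); out_train = layer(x, adj)         # sees a randomly thinned graph
layer.eval();  out_eval  = layer(x, adj)         # uses the nominal topology
```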
arXiv Detail & Related papers (2020-06-04T08:00:00Z)