Learning CO$_2$ plume migration in faulted reservoirs with Graph Neural
Networks
- URL: http://arxiv.org/abs/2306.09648v1
- Date: Fri, 16 Jun 2023 06:47:47 GMT
- Title: Learning CO$_2$ plume migration in faulted reservoirs with Graph Neural
Networks
- Authors: Xin Ju, François P. Hamon, Gege Wen, Rayan Kanfar, Mauricio
Araya-Polo, Hamdi A. Tchelepi
- Abstract summary: We develop a graph-based neural model for capturing the impact of faults on CO$_2$ plume migration.
We demonstrate that our approach can accurately predict the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with faults.
This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures.
- Score: 0.3914676152740142
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep-learning-based surrogate models provide an efficient complement to
numerical simulations for subsurface flow problems such as CO$_2$ geological
storage. Accurately capturing the impact of faults on CO$_2$ plume migration
remains a challenge for many existing deep learning surrogate models based on
Convolutional Neural Networks (CNNs) or Neural Operators. We address this
challenge with a graph-based neural model leveraging recent developments in the
field of Graph Neural Networks (GNNs). Our model combines graph-based
convolution Long-Short-Term-Memory (GConvLSTM) with a one-step GNN model,
MeshGraphNet (MGN), to operate on complex unstructured meshes and limit
temporal error accumulation. We demonstrate that our approach can accurately
predict the temporal evolution of gas saturation and pore pressure in a
synthetic reservoir with impermeable faults. Our results exhibit a better
accuracy and a reduced temporal error accumulation compared to the standard MGN
model. We also show the excellent generalizability of our algorithm to mesh
configurations, boundary conditions, and heterogeneous permeability fields not
included in the training set. This work highlights the potential of GNN-based
methods to accurately and rapidly model subsurface flow with complex faults and
fractures.
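The abstract above pairs a one-step GNN (MeshGraphNet) with a graph-convolutional LSTM so that a recurrent state limits error accumulation over the rollout. As an illustration only — not the authors' implementation; the shapes, parameter names, residual update, and toy mesh below are all assumptions — a GConvLSTM-style cell and autoregressive rollout can be sketched in NumPy:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops (GCN-style)."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gconv(A_hat, X, W):
    """One graph convolution: neighborhood aggregation, then a linear map."""
    return A_hat @ X @ W

def gconv_lstm_step(A_hat, x_t, h, c, params):
    """One GConvLSTM cell update: standard LSTM gates, but every affine map
    is replaced by a graph convolution over the mesh."""
    i = sigmoid(gconv(A_hat, x_t, params["Wi"]) + gconv(A_hat, h, params["Ui"]))
    f = sigmoid(gconv(A_hat, x_t, params["Wf"]) + gconv(A_hat, h, params["Uf"]))
    o = sigmoid(gconv(A_hat, x_t, params["Wo"]) + gconv(A_hat, h, params["Uo"]))
    g = np.tanh(gconv(A_hat, x_t, params["Wg"]) + gconv(A_hat, h, params["Ug"]))
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def rollout(A_hat, x0, params, W_out, n_steps):
    """Autoregressive rollout: each one-step prediction (e.g. saturation and
    pressure updates) is fed back as the next input, while the recurrent
    state (h, c) carries temporal context across steps."""
    n = x0.shape[0]
    d_h = params["Wi"].shape[1]
    h = np.zeros((n, d_h))
    c = np.zeros((n, d_h))
    x, trajectory = x0, []
    for _ in range(n_steps):
        h, c = gconv_lstm_step(A_hat, x, h, c, params)
        x = x + h @ W_out  # residual one-step update of the nodal state
        trajectory.append(x)
    return np.stack(trajectory)

# Toy example: a 5-node path "mesh", 2 state channels, hidden size 4.
rng = np.random.default_rng(0)
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1.0
A_hat = normalized_adjacency(A)
d_in, d_h = 2, 4
params = {}
for gate in "ifog":
    params[f"W{gate}"] = 0.1 * rng.standard_normal((d_in, d_h))
    params[f"U{gate}"] = 0.1 * rng.standard_normal((d_h, d_h))
W_out = 0.1 * rng.standard_normal((d_h, d_in))
x0 = rng.standard_normal((5, d_in))
traj = rollout(A_hat, x0, params, W_out, n_steps=10)
print(traj.shape)  # (10, 5, 2): n_steps x n_nodes x n_channels
```

Because the mesh connectivity enters every gate through the normalized adjacency, each node's update depends only on its neighborhood (which is how unstructured meshes and faults can be respected), while the recurrent state is what a purely one-step model lacks for damping rollout error.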
Related papers
- Positional Encoder Graph Quantile Neural Networks for Geographic Data [4.277516034244117]
We introduce the Positional Encoder Graph Quantile Neural Network (PE-GQNN), a novel method that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework.
Experiments on benchmark datasets demonstrate that PE-GQNN significantly outperforms existing state-of-the-art methods in both predictive accuracy and uncertainty quantification.
arXiv Detail & Related papers (2024-09-27T16:02:12Z)
- Hybridization of Persistent Homology with Neural Networks for Time-Series Prediction: A Case Study in Wave Height [0.0]
We introduce a feature engineering method that enhances the predictive performance of neural network models.
Specifically, we leverage computational topology techniques to derive valuable topological features from input data.
For time-ahead predictions, the enhancements in $R^2$ score were significant for FNNs, RNNs, LSTM, and GRU models.
arXiv Detail & Related papers (2024-09-03T01:26:21Z)
- Tackling Oversmoothing in GNN via Graph Sparsification: A Truss-based Approach [1.4854797901022863]
We propose a novel and flexible truss-based graph sparsification model that prunes edges from dense regions of the graph.
We then apply our sparsification model to state-of-the-art baseline GNN and pooling models, such as GIN, SAGPool, GMT, DiffPool, MinCutPool, HGP-SL, DMonPool, and AdamGNN.
arXiv Detail & Related papers (2024-07-16T17:21:36Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Interpretable A-posteriori Error Indication for Graph Neural Network Surrogate Models [0.0]
This work introduces an interpretability enhancement procedure for graph neural networks (GNNs).
The end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task.
The interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error.
arXiv Detail & Related papers (2023-11-13T18:37:07Z)
- Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals [7.6435511285856865]
Graph neural networks (GNNs) are widely used in domains like social networks and biological systems.
The locality assumption of GNNs hampers their ability to capture long-range dependencies and global patterns in graphs.
We propose a new inductive bias based on variational analysis, drawing inspiration from the Brachistochrone problem.
arXiv Detail & Related papers (2023-07-01T04:44:43Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited receptive field of finite-depth GNNs, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
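The binarized-GNN idea above — binary node representations combined with binary network parameters — can be illustrated with a minimal sketch. This assumes a simple GCN-style layer with sign binarization; it is illustrative only, not the BGN method itself, and the graph and shapes are made up:

```python
import numpy as np

def binarize(X):
    """Sign binarization to {-1, +1}; zeros map to +1."""
    return np.where(X >= 0, 1.0, -1.0)

def binarized_gcn_layer(A, X, W):
    """A GCN-style layer in which both the node features and the weight
    matrix are binarized before the matmul. The product of two {-1, +1}
    matrices is integer-valued and, on packed bits, reduces to XNOR +
    popcount, which is where the time and space savings come from."""
    return A @ (binarize(X) @ binarize(W))  # binary matmul, then neighbor aggregation

# Toy 4-node ring graph with self-loops (hypothetical example data).
A = np.eye(4)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = A[v, u] = 1.0

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))  # real-valued input features
W = rng.standard_normal((3, 2))  # real-valued weights (binarized inside the layer)

H = binarized_gcn_layer(A, X, W)
print(H.shape)  # (4, 2)
```

Every entry of the pre-aggregation product is an integer sum of ±1 terms, which is what makes the bit-packed implementation possible.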
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.