Constrained Transformer-Based Porous Media Generation to Spatial Distribution of Rock Properties
- URL: http://arxiv.org/abs/2410.21462v1
- Date: Mon, 28 Oct 2024 19:03:33 GMT
- Title: Constrained Transformer-Based Porous Media Generation to Spatial Distribution of Rock Properties
- Authors: Zihan Ren, Sanjay Srinivasan, Dustin Crandall
- Abstract summary: Pore-scale modeling of rock images based on information in 3D micro-computed tomography data is crucial for studying complex subsurface processes.
We propose a two-stage modeling framework that combines a Vector Quantized Variational Autoencoder (VQVAE) and a transformer model for spatial upscaling and arbitrary-size 3D porous media reconstruction.
- Abstract: Pore-scale modeling of rock images based on information in 3D micro-computed tomography data is crucial for studying complex subsurface processes such as CO2 and brine multiphase flow during Geologic Carbon Storage (GCS). While deep learning models can generate 3D rock microstructures that match static rock properties, they have two key limitations: they do not account for the spatial distribution of rock properties, which can strongly influence the flow and transport characteristics of the rock (such as permeability and relative permeability), and they generate structures below the representative elementary volume (REV) scale for those transport properties. Addressing these issues is crucial for building a consistent workflow between pore-scale analysis and field-scale modeling. To address these challenges, we propose a two-stage modeling framework that combines a Vector Quantized Variational Autoencoder (VQVAE) and a transformer model for spatial upscaling and arbitrary-size 3D porous media reconstruction in an autoregressive manner. The VQVAE first compresses and quantizes sub-volume training images into low-dimensional tokens, and a transformer is then trained to spatially assemble these tokens into larger images following a specific spatial order. By employing a multi-token generation strategy, our approach preserves both sub-volume integrity and the spatial relationships among these sub-image patches. We demonstrate the effectiveness of our multi-token transformer generation approach and validate it using real data from a test well, showcasing its potential to generate well-scale porous media models from only a spatial porosity model. The interpolated representative porous media that reflect field-scale geological properties accurately model transport properties, including permeability and the multiphase relative permeability of CO2 and brine.
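The vector-quantization step at the heart of the VQVAE stage can be sketched as follows. Every shape, the codebook size, and the toy data here are illustrative assumptions, not the authors' implementation: each encoder feature vector of a compressed sub-volume is snapped to its nearest codebook entry, producing a grid of discrete token indices that a transformer can later model autoregressively in a fixed spatial order.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 512, 64            # codebook size and embedding dim (assumed values)
codebook = rng.normal(size=(K, D))

# Hypothetical encoder output for one sub-volume: a 4x4x4 grid of D-dim vectors.
z_e = rng.normal(size=(4, 4, 4, D))

# Nearest-neighbour lookup: squared distance to every codebook entry.
flat = z_e.reshape(-1, D)                                        # (64, D)
d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)    # (64, K)
tokens = d2.argmin(axis=1).reshape(4, 4, 4)                      # discrete indices

# Quantized latent fed to the decoder; the token grid (not z_q) is what a
# transformer would assemble spatially, e.g. flattened in raster (z, y, x) order.
z_q = codebook[tokens]
raster_sequence = tokens.reshape(-1)                             # length-64 sequence
```

In a full pipeline the transformer would predict such token sequences for neighbouring sub-volumes conditioned on a coarse porosity model, and the VQVAE decoder would map each token grid back to a 3D image patch.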
Related papers
- Cross-domain and Cross-dimension Learning for Image-to-Graph Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z)
- DiffusionPCR: Diffusion Models for Robust Multi-Step Point Cloud Registration [73.37538551605712]
Point Cloud Registration (PCR) estimates the relative rigid transformation between two point clouds.
We propose formulating PCR as a denoising diffusion probabilistic process, mapping noisy transformations to the ground truth.
Our experiments showcase the effectiveness of our DiffusionPCR, yielding state-of-the-art registration recall rates (95.3%/81.6%) on 3D and 3DLoMatch.
arXiv Detail & Related papers (2023-12-05T18:59:41Z)
- Generative Modeling with Phase Stochastic Bridges [49.4474628881673]
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information in the content-conditioned range to help transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
- Mitigation of Spatial Nonstationarity with Vision Transformers [1.690637178959708]
We show the impact of two common types of geostatistical spatial nonstationarity on deep learning model prediction performance.
We propose the mitigation of such impacts using self-attention (vision transformer) models.
arXiv Detail & Related papers (2022-12-09T02:16:05Z)
- PnP-DETR: Towards Efficient Visual Analysis with Transformers [146.55679348493587]
Recently, DETR pioneered solving vision tasks with transformers; it directly translates the image feature map into the object detection result.
The recent transformer-based image recognition model ViT shows a consistent efficiency gain.
arXiv Detail & Related papers (2021-09-15T01:10:30Z)
- RockGPT: Reconstructing three-dimensional digital rocks from single two-dimensional slice from the perspective of video generation [0.0]
We propose a new framework, named RockGPT, to synthesize 3D samples based on a single 2D slice from the perspective of video generation.
In order to obtain diverse reconstructions, the discrete latent codes are modeled using conditional GPT.
We conduct two experiments on five kinds of rocks, and the results demonstrate that RockGPT can produce different kinds of rocks with the same model.
arXiv Detail & Related papers (2021-08-05T00:12:43Z)
- Deep-learning-based coupled flow-geomechanics surrogate model for CO$_2$ sequestration [4.635171370680939]
The 3D recurrent R-U-Net model combines deep convolutional and recurrent neural networks to capture the spatial distribution and temporal evolution of saturation, pressure and surface displacement fields.
The surrogate model is trained to predict the 3D CO2 saturation and pressure fields in the storage aquifer, and 2D displacement maps at the Earth's surface.
arXiv Detail & Related papers (2021-05-04T07:34:15Z)
- Multi-Scale Neural Networks for Fluid Flow in 3D Porous Media [0.0]
We develop a general multiscale deep learning model that is able to learn from porous media simulation data.
We enable the evaluation of large images in approximately one second on a single Graphics Processing Unit.
arXiv Detail & Related papers (2021-02-10T23:38:36Z)
- Characterizing the Latent Space of Molecular Deep Generative Models with Persistent Homology Metrics [21.95240820041655]
Variational Autoencoders (VAEs) are generative models in which encoder-decoder network pairs are trained to reconstruct training data distributions.
We propose a method for measuring how well the latent space of deep generative models is able to encode structural and chemical features.
arXiv Detail & Related papers (2020-10-18T13:33:02Z)
- Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition [79.33539539956186]
We propose a simple method to disentangle multi-scale graph convolutions and a unified spatial-temporal graph convolutional operator named G3D.
By coupling these proposals, we develop a powerful feature extractor named MS-G3D based on which our model outperforms previous state-of-the-art methods on three large-scale datasets.
arXiv Detail & Related papers (2020-03-31T11:28:25Z)
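The "disentangling" idea in the last entry above can be sketched in a few lines. Instead of raising the adjacency matrix to a power (which re-counts short paths and biases aggregation toward nearby joints), each scale k gets its own adjacency that keeps only node pairs at shortest-path distance exactly k. The toy 5-node chain graph and all names below are illustrative, not the MS-G3D implementation.

```python
import numpy as np

# Toy 5-node chain graph: 0 - 1 - 2 - 3 - 4
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

def k_hop_adjacencies(A, K):
    """A_k[i, j] = 1 iff shortest-path distance between i and j is exactly k."""
    n = A.shape[0]
    reach_prev = np.eye(n, dtype=bool)   # pairs reachable in fewer than k hops
    power = np.eye(n)
    out = []
    for _ in range(K):
        power = power @ A                        # walks of length exactly k
        reach_now = (power > 0) | reach_prev     # pairs at distance <= k
        out.append((reach_now & ~reach_prev).astype(float))  # distance == k
        reach_prev = reach_now
    return out

A1, A2 = k_hop_adjacencies(A, 2)

# Multi-scale aggregation: sum over scales of row-normalized A_k @ X @ W_k,
# so distant neighbours are not drowned out by redundant short paths.
X = np.random.default_rng(1).normal(size=(5, 8))     # node features
W = np.random.default_rng(2).normal(size=(2, 8, 8))  # one weight per scale
deg_inv = lambda M: np.diag(1.0 / np.maximum(M.sum(1), 1))
out = sum(deg_inv(Ak) @ Ak @ X @ Wk for Ak, Wk in zip((A1, A2), W))
```

On the chain graph, `A1` recovers the original adjacency while `A2` connects only pairs two hops apart, e.g. nodes 0 and 2 but not 0 and 3.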
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.