RockGPT: Reconstructing three-dimensional digital rocks from single
two-dimensional slice from the perspective of video generation
- URL: http://arxiv.org/abs/2108.03132v1
- Date: Thu, 5 Aug 2021 00:12:43 GMT
- Title: RockGPT: Reconstructing three-dimensional digital rocks from single
two-dimensional slice from the perspective of video generation
- Authors: Qiang Zheng and Dongxiao Zhang
- Abstract summary: We propose a new framework, named RockGPT, to synthesize 3D samples based on a single 2D slice from the perspective of video generation.
In order to obtain diverse reconstructions, the discrete latent codes are modeled using conditional GPT.
We conduct two experiments on five kinds of rocks, and the results demonstrate that RockGPT can produce different kinds of rocks with the same model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Random reconstruction of three-dimensional (3D) digital rocks from
two-dimensional (2D) slices is crucial for elucidating the microstructure of
rocks and its effects on pore-scale flow in numerical modeling, since large
numbers of samples are usually required to handle intrinsic uncertainties.
Despite the remarkable advances achieved by traditional process-based methods,
statistical approaches, and, more recently, deep learning-based models, few
works have focused on producing several kinds of rock with a single trained
model while allowing the reconstructed samples to satisfy given properties,
such as porosity. To fill this gap, we propose a new framework, named RockGPT,
composed of a VQ-VAE and a conditional GPT, to synthesize 3D samples based on a
single 2D slice from the perspective of video generation. The VQ-VAE compresses
the high-dimensional input video, i.e., the sequence of consecutive rock
slices, into discrete latent codes and reconstructs the slices from these
codes. To obtain diverse reconstructions, the discrete latent codes are modeled
autoregressively with a conditional GPT that incorporates conditioning
information from a given slice, the rock type, and the porosity. We conduct two
experiments on five kinds of rocks; the results demonstrate that RockGPT can
produce different kinds of rocks with the same model and that the reconstructed
samples meet the specified porosities. More broadly, by leveraging the proposed
conditioning scheme, RockGPT offers an effective way to build a single general
model that produces multiple kinds of rocks while satisfying user-defined
properties.
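The abstract describes a two-stage pipeline: a VQ-VAE first discretizes a stack of consecutive slices into latent codes, and a conditional GPT then models those codes autoregressively given a starting slice, a rock type, and a target porosity. The sketch below illustrates this idea in PyTorch; the module names, layer sizes, codebook size, and the exact conditioning scheme (one prepended token each for the slice, rock type, and porosity) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a RockGPT-style two-stage pipeline (illustrative only).
# Stage 1: a VQ-VAE compresses a stack of consecutive 2D rock slices, treated
#          as a short video, into discrete latent codes and reconstructs it.
# Stage 2: a conditional GPT models the codes autoregressively, conditioned on
#          a given slice, a rock-type id, and a target porosity.
# All names, shapes, and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                          # z: (B, T, dim)
        b, t, c = z.shape
        dist = torch.cdist(z.reshape(-1, c), self.codebook.weight)  # (B*T, K)
        idx = dist.argmin(dim=-1).view(b, t)       # discrete code indices
        z_q = self.codebook(idx)                   # quantized latents (B, T, dim)
        return z + (z_q - z).detach(), idx         # straight-through gradient


class RockVQVAE(nn.Module):
    """Stage 1: compress a slice stack (B, 1, D, H, W) into discrete codes."""

    def __init__(self, dim=64, num_codes=512):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, dim, 4, stride=2, padding=1),
        )
        self.quant = VectorQuantizer(num_codes, dim)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.enc(x)                            # (B, dim, d, h, w)
        b, c, d, h, w = z.shape
        z_q, idx = self.quant(z.flatten(2).transpose(1, 2))
        recon = self.dec(z_q.transpose(1, 2).reshape(b, c, d, h, w))
        return recon, idx


class ConditionalCodeGPT(nn.Module):
    """Stage 2: autoregressive prior over code indices, conditioned on a 2D
    slice, a rock-type id, and a target porosity (prepended as three tokens)."""

    def __init__(self, num_codes=512, dim=64, num_rock_types=5, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(num_codes, dim)
        self.pos = nn.Embedding(max_len + 3, dim)  # +3 condition slots
        self.rock = nn.Embedding(num_rock_types, dim)
        self.poro = nn.Linear(1, dim)
        self.slice_enc = nn.Sequential(            # 2D slice -> one token
            nn.Conv2d(1, dim, 8, stride=8), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.gpt = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, num_codes)

    def forward(self, codes, slice2d, rock_id, porosity):
        # codes: (B, T) int64, slice2d: (B, 1, H, W),
        # rock_id: (B,) int64, porosity: (B, 1) float in [0, 1]
        cond = torch.stack(
            [self.slice_enc(slice2d), self.rock(rock_id), self.poro(porosity)],
            dim=1,
        )                                          # (B, 3, dim)
        x = torch.cat([cond, self.tok(codes)], dim=1)
        x = x + self.pos(torch.arange(x.shape[1], device=x.device))
        causal = torch.triu(                       # forbid attending to the future
            torch.full((x.shape[1], x.shape[1]), float("-inf"), device=x.device), 1
        )
        h = self.gpt(x, mask=causal)
        return self.head(h[:, 2:-1])               # next-code logits, one per code


# Hypothetical usage: train the VQ-VAE first, then train the GPT on its codes.
# volume: (B, 1, 32, 32, 32) binary slice stack -> 8*8*8 = 512 codes.
# vqvae, gpt = RockVQVAE(), ConditionalCodeGPT()
# recon, codes = vqvae(volume)
# logits = gpt(codes, volume[:, :, 0], rock_id, porosity)  # condition on slice 0
# loss = nn.functional.cross_entropy(logits.reshape(-1, 512), codes.reshape(-1))
```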
Related papers
- Constrained Transformer-Based Porous Media Generation to Spatial Distribution of Rock Properties [0.0]
Pore-scale modeling of rock images based on information in 3D micro-computed tomography data is crucial for studying complex subsurface processes.
We propose a two-stage modeling framework that combines a Vector Quantized Variational Autoencoder (VQVAE) and a transformer model for spatial upscaling and arbitrary-size 3D porous media reconstruction.
arXiv Detail & Related papers (2024-10-28T19:03:33Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- Topology-Aware Latent Diffusion for 3D Shape Generation [20.358373670117537]
We introduce a new generative model that combines latent diffusion with persistent homology to create 3D shapes with high diversity.
Our method involves representing 3D shapes as implicit fields, then employing persistent homology to extract topological features.
arXiv Detail & Related papers (2024-01-31T05:13:53Z)
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- GVP: Generative Volumetric Primitives [76.95231302205235]
We present Generative Volumetric Primitives (GVP), the first pure 3D generative model that can sample and render 512-resolution images in real-time.
GVP jointly models a number of primitives and their spatial information, both of which can be efficiently generated via a 2D convolutional network.
Experiments on several datasets demonstrate superior efficiency and 3D consistency of GVP over the state-of-the-art.
arXiv Detail & Related papers (2023-03-31T16:50:23Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- Automated LoD-2 Model Reconstruction from Very-High-Resolution Satellite-derived Digital Surface Model and Orthophoto [1.2691047660244335]
We propose a model-driven method that reconstructs LoD-2 building models following a "decomposition-optimization-fitting" paradigm.
Our proposed method has addressed a few technical caveats over existing methods, resulting in practically high-quality results.
arXiv Detail & Related papers (2021-09-08T19:03:09Z)
- Feature Disentanglement in generating three-dimensional structure from two-dimensional slice with sliceGAN [35.3148116010546]
sliceGAN proposed a new way of using the generative adversarial network (GAN) to capture the micro-structural characteristics of a two-dimensional (2D) slice.
We combine sliceGAN with AdaIN to endow the model with the ability to disentangle the features and control the synthesis.
arXiv Detail & Related papers (2021-05-01T08:29:33Z)
- Digital rock reconstruction with user-defined properties using conditional generative adversarial networks [0.0]
Generative adversarial networks (GANs) are becoming increasingly popular since they can reproduce training images with excellent visual and geologic realism.
In this study, we propose conditional GANs for digital rock reconstruction, aiming to reproduce samples that are not only similar to the real training data but also satisfy user-specified properties (a porosity check of this kind is sketched after this list).
arXiv Detail & Related papers (2020-11-29T10:55:58Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework, drawn from the graph-neural-network literature, to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
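Both RockGPT and the conditional-GAN paper listed above condition the generator on user-specified properties such as porosity. As a small, self-contained illustration (the verification step is not described in either abstract, so the function names and tolerance below are assumptions), the porosity of a segmented digital rock is simply the pore-voxel fraction, which makes checking a reconstruction against a target porosity straightforward:

```python
import numpy as np


def porosity(volume: np.ndarray, pore_value: int = 1) -> float:
    """Porosity of a segmented digital rock: the fraction of voxels labelled
    as pore. `volume` is assumed to be a binary 3D array in which `pore_value`
    marks pore space and every other voxel is solid grain."""
    return float(np.mean(volume == pore_value))


def meets_target(volume: np.ndarray, target: float, tol: float = 0.01) -> bool:
    """Check a reconstructed sample against a user-specified porosity,
    accepting any value within an (assumed) absolute tolerance."""
    return abs(porosity(volume) - target) <= tol


# Hypothetical usage with a random 128^3 binary volume standing in for a
# reconstructed sample:
sample = (np.random.rand(128, 128, 128) < 0.2).astype(np.uint8)
print(porosity(sample), meets_target(sample, target=0.2))
```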