Towards geological inference with process-based and deep generative modeling, part 1: training on fluvial deposits
- URL: http://arxiv.org/abs/2510.14445v1
- Date: Thu, 16 Oct 2025 08:43:40 GMT
- Title: Towards geological inference with process-based and deep generative modeling, part 1: training on fluvial deposits
- Authors: Guillaume Rongier, Luk Peeters
- Abstract summary: This study explores whether a generative adversarial network (GAN) can be trained to reproduce fluvial deposits simulated by a process-based model. Developments from the deep-learning community to generate large 2D images are directly transferable to 3D images of fluvial deposits. We show how the deposition time lets us monitor and validate the performance of a GAN by checking that its samples honor the law of superposition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The distribution of resources in the subsurface is deeply linked to the variations of its physical properties. Generative modeling has long been used to predict those physical properties while quantifying the associated uncertainty. But current approaches struggle to properly reproduce geological structures, and fluvial deposits in particular, because of their continuity. This study explores whether a generative adversarial network (GAN) - a type of deep-learning algorithm for generative modeling - can be trained to reproduce fluvial deposits simulated by a process-based model - a more expensive model that mimics geological processes. An ablation study shows that developments from the deep-learning community to generate large 2D images are directly transferable to 3D images of fluvial deposits. Training remains stable, and the generated samples reproduce the non-stationarity and details of the deposits without mode collapse or pure memorization of the training data. Using a process-based model to generate those training data allows us to include valuable properties other than the usual physical properties. We show how the deposition time lets us monitor and validate the performance of a GAN by checking that its samples honor the law of superposition. Our work joins a series of previous studies suggesting that GANs are more robust than they are given credit for, at least for training datasets targeting specific geological structures. Whether this robustness transfers to larger 3D images and multimodal datasets remains to be seen. Exploring how deep generative models can leverage geological principles like the law of superposition shows a lot of promise.
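The superposition check described in the abstract can be sketched as a simple monotonicity test on a gridded deposition-time property. This is a hypothetical illustration, not the paper's implementation: the function name, the array layout, and the convention that index 0 along the vertical axis is the bottom of the grid are all assumptions.

```python
import numpy as np

def superposition_violation_rate(deposition_time, vertical_axis=0):
    """Fraction of vertically adjacent cell pairs whose deposition time
    decreases upward. The law of superposition says undisturbed deposits
    get younger upward, so a valid sample should score ~0.
    Assumes index 0 along `vertical_axis` is the bottom of the grid."""
    upward_diff = np.diff(deposition_time, axis=vertical_axis)
    return float(np.mean(upward_diff < 0))

# A sample whose layers get strictly younger upward honors superposition:
layered = np.broadcast_to(np.arange(10.0)[:, None, None], (10, 4, 4))
print(superposition_violation_rate(layered))  # 0.0

# Flipping the grid upside down violates it everywhere:
inverted = layered[::-1]
print(superposition_violation_rate(inverted))  # 1.0
```

A metric like this could be tracked during training to flag generated samples that break the stratigraphic ordering, which is the kind of monitoring the abstract alludes to.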
Related papers
- Towards geological inference with process-based and deep generative modeling, part 2: inversion of fluvial deposits and latent-space disentanglement [0.0]
A generative adversarial network (GAN) trained to produce fluvial deposits can be inverted to match well and seismic data. Four inversion approaches applied to three test samples with 4, 8, and 20 wells struggled to match these well data. GANs can already handle the tasks required for their integration into geomodeling.
arXiv Detail & Related papers (2025-10-20T12:22:12Z) - Synthetic Geology -- Structural Geology Meets Deep Learning [3.216132991084434]
Building on techniques of generative artificial intelligence applied to voxelated images, we demonstrate a method that extends surface geological data to a three-dimensional subsurface region by training a neural network. We close this data gap in the development of subsurface deep learning by designing a synthetic data-generator process that mimics eons of geological activity. A foundation model trained on such synthetic data is able to generate a 3D image of the subsurface from a previously unseen map of surface topography and geology.
arXiv Detail & Related papers (2025-06-11T20:42:28Z) - Well2Flow: Reconstruction of reservoir states from sparse wells using score-based generative models [0.22499166814992438]
This study investigates the use of score-based generative models for reservoir simulation scenarios. It focuses on reconstructing spatially varying porosity and saturation fields in saline aquifers, inferred from sparse observations at two well locations. It introduces a novel methodology for incorporating physical constraints and well log guidance into generative models, significantly enhancing the accuracy and physical plausibility of the reconstructed subsurface states.
arXiv Detail & Related papers (2025-04-07T20:12:19Z) - Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are fed back their own produced content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial temperature leads the model to degenerate.
arXiv Detail & Related papers (2024-04-02T21:51:39Z) - A Phase Transition in Diffusion Models Reveals the Hierarchical Nature of Data [51.03144354630136]
Recent advancements show that diffusion models can generate high-quality images. We study this phenomenon in a hierarchical generative model of data. We find that the backward diffusion process acting after a time $t$ is governed by a phase transition.
arXiv Detail & Related papers (2024-02-26T19:52:33Z) - Surf-D: Generating High-Quality Surfaces of Arbitrary Topologies Using Diffusion Models [83.35835521670955]
Surf-D is a novel method for generating high-quality 3D shapes as Surfaces with arbitrary topologies.
We use the Unsigned Distance Field (UDF) as our surface representation to accommodate arbitrary topologies.
We also propose a new pipeline that employs a point-based AutoEncoder to learn a compact and continuous latent space for accurately encoding UDF.
arXiv Detail & Related papers (2023-11-28T18:56:01Z) - Learning Generative Models for Lumped Rainfall-Runoff Modeling [3.69758875412828]
This study presents a novel generative modeling approach to rainfall-runoff modeling, focusing on the synthesis of realistic daily catchment runoff time series.
Unlike traditional process-based lumped hydrologic models, our approach uses a small number of latent variables to characterize runoff generation processes.
In this study, we trained the generative models using neural networks on data from over 3,000 global catchments and achieved prediction accuracies comparable to current deep learning models.
arXiv Detail & Related papers (2023-09-18T16:07:41Z) - Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
arXiv Detail & Related papers (2023-05-28T05:38:28Z) - Latent Traversals in Generative Models as Potential Flows [113.4232528843775]
We propose to model latent structures with a learned dynamic potential landscape.
Inspired by physics, optimal transport, and neuroscience, these potential landscapes are learned as physically realistic partial differential equations.
Our method achieves both more qualitatively and quantitatively disentangled trajectories than state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-25T15:53:45Z) - How Well Do Sparse Imagenet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z) - Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
arXiv Detail & Related papers (2021-09-07T21:05:56Z) - Learning 3D Mineral Prospectivity from 3D Geological Models with Convolutional Neural Networks: Application to a Structure-controlled Hydrothermal Gold Deposit [4.647073295455922]
We present a novel method that leverages convolutional neural networks (CNNs) to learn 3D mineral prospectivity from the 3D geological models.
Specifically, to explore the unstructured 3D geological models with the CNNs whose input should be structured, we develop a 2D CNN framework.
This ensures an effective and efficient training of CNNs while allowing the prospective model to approximate the ore-forming process.
arXiv Detail & Related papers (2021-09-02T07:34:10Z) - Seismic Inverse Modeling Method based on Generative Adversarial Network [20.323205728116545]
The paper proposes an inversion modeling method based on GAN consistent with geology, well logs, seismic data.
GAN is one of the most promising generative modeling algorithms for extracting the spatial structure and abstract features of training images.
Results show that inversion models conform to observation data and have a low uncertainty under the premise of fast generation.
arXiv Detail & Related papers (2021-06-08T09:14:39Z)