A stable deep adversarial learning approach for geological facies
generation
- URL: http://arxiv.org/abs/2305.13318v3
- Date: Mon, 4 Mar 2024 14:31:05 GMT
- Title: A stable deep adversarial learning approach for geological facies
generation
- Authors: Ferdinand Bhavsar, Nicolas Desassis, Fabien Ors, Thomas Romary
- Abstract summary: Deep generative learning is a promising approach to overcome the limitations of traditional geostatistical simulation models.
This research aims to investigate the application of generative adversarial networks and deep variational inference for conditionally simulating meandering channels in underground volumes.
- Score: 32.97208255533144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The simulation of geological facies in an unobservable volume is essential in
various geoscience applications. Given the complexity of the problem, deep
generative learning is a promising approach to overcome the limitations of
traditional geostatistical simulation models, in particular their lack of
physical realism. This research aims to investigate the application of
generative adversarial networks and deep variational inference for
conditionally simulating meandering channels in underground volumes. In this
paper, we review the generative deep learning approaches, in particular the
adversarial ones and the stabilization techniques that aim to facilitate their
training. The proposed approach is tested on 2D and 3D simulations generated by
the stochastic process-based model Flumy. Morphological metrics are utilized to
compare our proposed method with earlier iterations of generative adversarial
networks. The results indicate that by utilizing recent stabilization
techniques, generative adversarial networks can efficiently sample from target
data distributions. Moreover, we demonstrate the ability to generate
conditioned simulations through the latent variable model property of the
proposed approach.
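As a rough illustration of the two ingredients highlighted above, the sketch below shows a Wasserstein-GAN gradient-penalty critic update (one example of a recent stabilization technique) and a simple way to condition a trained generator by optimizing its latent code against observed cells. This is a minimal sketch, not the authors' implementation: the toy networks, the 64x64 single-channel facies patches, and all names and hyperparameters (Z_DIM, lambda_gp, learning rates) are assumptions made for the example; the paper's actual architectures, losses, and conditioning scheme may differ.

```python
# Minimal sketch, NOT the authors' code: (i) a WGAN-GP critic update, a widely
# used GAN stabilization technique, and (ii) conditioning by optimizing the
# latent code so that the generated facies map honours observed cells.
# Toy architectures, 64x64 single-channel patches, and hyperparameters are
# illustrative assumptions only.
import torch
import torch.nn as nn

Z_DIM = 128  # assumed latent dimension

generator = nn.Sequential(   # toy generator: latent vector -> 64x64 facies map
    nn.Linear(Z_DIM, 64 * 64), nn.Tanh(), nn.Unflatten(1, (1, 64, 64)))
critic = nn.Sequential(      # toy critic: 64x64 map -> unbounded realism score
    nn.Flatten(), nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

def gradient_penalty(real, fake):
    """WGAN-GP term: pushes the critic's gradient norm toward 1 on points
    interpolated between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def critic_step(real, opt_c, lambda_gp=10.0):
    """One stabilized critic update: Wasserstein loss plus gradient penalty."""
    fake = generator(torch.randn(real.size(0), Z_DIM)).detach()
    loss = (critic(fake).mean() - critic(real).mean()
            + lambda_gp * gradient_penalty(real, fake))
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    return loss.item()

def condition_on_observations(obs, mask, steps=200, lr=0.05):
    """Crude latent-space conditioning: search for a latent code whose generated
    map matches the observed cells (mask == 1); generator weights are not updated."""
    z = torch.randn(1, Z_DIM, requires_grad=True)
    opt_z = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        mismatch = ((generator(z) - obs) * mask).pow(2).mean()
        opt_z.zero_grad()
        mismatch.backward()
        opt_z.step()
    return generator(z).detach()

# Example usage with random stand-in data (placeholders for real Flumy patches):
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.5, 0.9))
real_batch = torch.rand(8, 1, 64, 64)
critic_step(real_batch, opt_c)
obs, mask = torch.zeros(1, 1, 64, 64), torch.zeros(1, 1, 64, 64)
mask[..., 32, 32] = 1.0  # pretend a single cell has been observed
conditioned = condition_on_observations(obs, mask)
```

A full training loop would alternate generator updates with several critic updates per batch; the same latent-search idea extends to 3D volumes, although the paper's own conditioning mechanism may differ.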
Related papers
- You are out of context! [0.0]
New data can act as forces stretching, compressing, or twisting the geometric relationships learned by a model.
We propose a novel drift detection methodology for machine learning (ML) models based on the concept of ''deformation'' in the vector space representation of data.
arXiv Detail & Related papers (2024-11-04T10:17:43Z)
- Model-based Policy Optimization using Symbolic World Model [46.42871544295734]
The application of learning-based control methods in robotics presents significant challenges.
One is that model-free reinforcement learning algorithms use observation data with low sample efficiency.
We suggest approximating transition dynamics with symbolic expressions, which are generated via symbolic regression.
arXiv Detail & Related papers (2024-07-18T13:49:21Z)
- Model Evaluation and Anomaly Detection in Temporal Complex Networks using Deep Learning Methods [0.0]
This paper proposes an automatic deep learning-based approach for evaluating the results of temporal network models.
In addition to an evaluation method, the proposed method can also be used for anomaly detection in evolving networks.
arXiv Detail & Related papers (2024-06-15T09:19:09Z)
- Modeling Randomly Observed Spatiotemporal Dynamical Systems [7.381752536547389]
Currently available neural network-based modeling approaches fall short when faced with data collected randomly over time and space.
In response, we developed a new method that effectively handles such randomly sampled data.
Our model integrates techniques from amortized variational inference, neural differential equations, neural point processes, and implicit neural representations to predict both the dynamics of the system and the timings and locations of future observations.
arXiv Detail & Related papers (2024-06-01T09:03:32Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm performs distributed Bayesian filtering for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for accelerating stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Amortized Bayesian Inference for Models of Cognition [0.1529342790344802]
Recent advances in simulation-based inference using specialized neural network architectures circumvent many previous problems of approximate Bayesian computation.
We provide a general introduction to amortized Bayesian parameter estimation and model comparison.
arXiv Detail & Related papers (2020-05-08T08:12:15Z)