Modeling extra-deep electromagnetic logs using a deep neural network
- URL: http://arxiv.org/abs/2005.08919v3
- Date: Fri, 13 Aug 2021 08:52:57 GMT
- Title: Modeling extra-deep electromagnetic logs using a deep neural network
- Authors: Sergey Alyaev, Mostafa Shahriari, David Pardo, Angel Javier Omella,
David Larsen, Nazanin Jahani, Erich Suter
- Abstract summary: Modern geosteering is heavily dependent on real-time interpretation of deep electromagnetic (EM) measurements.
We present a methodology to construct a deep neural network (DNN) model trained to reproduce a full set of extra-deep EM logs.
The model is trained in a 1D layered environment consisting of up to seven layers with different resistivity values.
- Score: 0.415623340386296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern geosteering is heavily dependent on real-time interpretation of deep
electromagnetic (EM) measurements. We present a methodology to construct a deep
neural network (DNN) model trained to reproduce a full set of extra-deep EM
logs consisting of 22 measurements per logging position. The model is trained
in a 1D layered environment consisting of up to seven layers with different
resistivity values. A commercial simulator provided by a tool vendor is used to
generate a training dataset. The dataset size is limited because the simulator
provided by the vendor is optimized for sequential execution. Therefore, we
design a training dataset that embraces the geological rules and geosteering
specifics supported by the forward model. We use this dataset to produce an EM
simulator based on a DNN without access to the proprietary information about
the EM tool configuration or the original simulator source code. Despite
employing a relatively small training set size, the resulting DNN forward model
is quite accurate for the considered examples: a multi-layer synthetic case and
a section of a published historical operation from the Goliat Field. The
observed average evaluation time of 0.15 ms per logging position also makes it
suitable for future use as part of evaluation-hungry statistical and/or
Monte-Carlo inversion algorithms within geosteering workflows.
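As a rough illustration of the kind of surrogate described in the abstract, the sketch below maps a parameterized 1D layered earth model to 22 log values with a small fully connected network. The input parameterization (boundary depths plus layer resistivities), the hidden-layer sizes, and the PyTorch implementation are illustrative assumptions; the paper's actual architecture, feature scaling, and vendor-simulated training data are not reproduced here.

```python
# Minimal sketch of a DNN forward model for extra-deep EM logs.
# Assumptions (not from the paper): the input parameterization and
# network sizes below are illustrative only.
import torch
import torch.nn as nn

N_LAYERS = 7                            # up to seven resistivity layers (abstract)
N_INPUTS = (N_LAYERS - 1) + N_LAYERS    # boundary depths + resistivities (assumed)
N_OUTPUTS = 22                          # measurements per logging position (abstract)

class EMForwardDNN(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_INPUTS, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_OUTPUTS),
        )

    def forward(self, x):
        # x: (batch, N_INPUTS) scaled earth-model parameters
        return self.net(x)

model = EMForwardDNN()
params = torch.rand(1024, N_INPUTS)     # stand-in for the simulator-generated dataset
logs = model(params)                    # (1024, 22) predicted EM logs
print(logs.shape)
```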
Related papers
- Data-Augmented Predictive Deep Neural Network: Enhancing the extrapolation capabilities of non-intrusive surrogate models [0.5735035463793009]
We propose a new deep learning framework in which kernel dynamic mode decomposition (KDMD) is employed to evolve the dynamics of the latent space generated by the encoder part of a convolutional autoencoder (CAE).
After adding the KDMD-decoder-extrapolated data into the original data set, we train the CAE along with a feed-forward deep neural network using the augmented data.
The trained network can predict future states outside the training time interval at any out-of-training parameter samples.
arXiv Detail & Related papers (2024-10-17T09:26:14Z) - Minimally Supervised Learning using Topological Projections in
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z) - TRAK: Attributing Model Behavior at Scale [79.56020040993947]
- TRAK: Attributing Model Behavior at Scale [79.56020040993947]
We present TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models.
arXiv Detail & Related papers (2023-03-24T17:56:22Z) - Online Evolutionary Neural Architecture Search for Multivariate
Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z) - Strategic Geosteeering Workflow with Uncertainty Quantification and Deep
Learning: A Case Study on the Goliat Field [0.0]
This paper presents a practical workflow consisting of offline and online phases.
The offline phase includes training and building of an uncertain prior near-well geo-model.
The online phase uses the flexible iterative ensemble smoother (FlexIES) to perform real-time assimilation of extra-deep electromagnetic data.
arXiv Detail & Related papers (2022-10-27T15:38:26Z) - Fast emulation of density functional theory simulations using
- Fast emulation of density functional theory simulations using approximate Gaussian processes [0.6445605125467573]
A second statistical model that predicts the simulation output can be used in lieu of the full simulation during model fitting.
We use the emulators to calibrate, in a Bayesian manner, the density functional theory (DFT) model parameters using observed data.
The utility of these DFT models is to make predictions, based on observed data, about the properties of experimentally unobserved nuclides.
arXiv Detail & Related papers (2022-08-24T05:09:36Z) - Learning Large-scale Subsurface Simulations with a Hybrid Graph Network
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z) - StorSeismic: A new paradigm in deep learning for seismic processing [0.0]
StorSeismic is a framework for seismic data processing.
The network is pre-trained on field seismic data, along with synthetically generated data, in a self-supervised step.
Then, we use the labeled synthetic data to fine-tune the pre-trained network in a supervised fashion to perform various seismic processing tasks.
arXiv Detail & Related papers (2022-04-30T09:55:00Z) - Opportunistic Emulation of Computationally Expensive Simulations via
Deep Learning [9.13837510233406]
We investigate the use of deep neural networks for opportunistic model emulation of APSIM models.
We focus on emulating four important outputs of the APSIM model: runoff, soil_loss, DINrunoff, Nleached.
arXiv Detail & Related papers (2021-08-25T05:57:16Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Statistical model-based evaluation of neural networks [74.10854783437351]
- Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
arXiv Detail & Related papers (2020-11-18T00:33:24Z)