Propagating the prior from shallow to deep with a pre-trained velocity-model Generative Transformer network
- URL: http://arxiv.org/abs/2408.09767v1
- Date: Mon, 19 Aug 2024 07:56:43 GMT
- Title: Propagating the prior from shallow to deep with a pre-trained velocity-model Generative Transformer network
- Authors: Randy Harsuko, Shijun Cheng, Tariq Alkhalifah
- Abstract summary: Building subsurface velocity models is essential to our goals in utilizing seismic data for exploration and monitoring.
We introduce VelocityGPT, a novel implementation that utilizes Transformer decoders trained autoregressively to generate a velocity model from shallow subsurface to deep.
We demonstrate the effectiveness of VelocityGPT as a promising approach in generative model applications for seismic velocity model building.
- Score: 2.499907423888049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building subsurface velocity models is essential to our goals in utilizing seismic data for Earth discovery and exploration, as well as monitoring. With the dawn of machine learning, these velocity models (or, more precisely, their distribution) can be stored accurately and efficiently in a generative model. These stored velocity model distributions can be utilized to regularize or quantify uncertainties in inverse problems, like full waveform inversion. However, most generators, like normalizing flows or diffusion models, treat the image (velocity model) uniformly, disregarding spatial dependencies and resolution changes with respect to the observation locations. To address this weakness, we introduce VelocityGPT, a novel implementation that utilizes Transformer decoders trained autoregressively to generate a velocity model from the shallow subsurface to the deep. Because seismic data are often recorded on the Earth's surface, a top-down generator can use the inverted information in the shallow section as guidance (a prior) for generating the deep. To facilitate the implementation, we use an additional network to compress the velocity model. We also inject prior information, such as well data or structure (represented by a migration image), into the generation of the velocity model. Using synthetic data, we demonstrate the effectiveness of VelocityGPT as a promising approach in generative model applications for seismic velocity model building.
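As a rough illustration of the autoregressive, shallow-to-deep generation described in the abstract, the sketch below shows a decoder-only Transformer trained to predict the next token of a compressed (tokenized) velocity model, with tokens ordered from the shallowest row to the deepest. This is a minimal PyTorch sketch under stated assumptions: all names (VelocityGPTSketch, vocab_size, etc.) and hyperparameters are illustrative, not the authors' code, and the compression network that produces the discrete tokens is assumed to be pretrained separately.

```python
# Illustrative sketch only: a GPT-style decoder over a tokenized velocity
# model, generated from shallow rows to deep rows. Names and sizes are
# assumptions, not the released VelocityGPT implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VelocityGPTSketch(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, n_heads=8,
                 n_layers=6, max_len=1024):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        # Decoder-only (GPT-style) stack: encoder layers + a causal mask.
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq) integer codes of the compressed velocity
        # model, ordered shallow -> deep, so the causal mask lets each
        # deeper token attend only to shallower ones.
        b, s = tokens.shape
        pos = torch.arange(s, device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(pos)[None]
        causal = torch.triu(
            torch.full((s, s), float("-inf"), device=tokens.device), diagonal=1)
        x = self.blocks(x, mask=causal)
        return self.head(x)  # next-token logits


# Autoregressive training step: predict each token from the tokens above it.
model = VelocityGPTSketch()
tokens = torch.randint(0, 512, (4, 128))     # dummy token sequences, shallow -> deep
logits = model(tokens)
loss = F.cross_entropy(logits[:, :-1].reshape(-1, 512),
                       tokens[:, 1:].reshape(-1))
loss.backward()
```

At inference time, one would seed the sequence with tokens for the shallow section (or with encoded well or migration-image priors prepended as a conditioning prefix) and sample deeper tokens one at a time; the exact conditioning scheme in the paper may differ from this sketch.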
Related papers
- High Resolution Seismic Waveform Generation using Denoising Diffusion [3.5046784866523932]
This study introduces a novel, efficient, and scalable generative model for high-frequency seismic waveform generation.
A spectrogram representation of seismic waveform data is reduced to a lower-dimensional submanifold via an autoencoder.
A state-of-the-art diffusion model is trained to generate this latent representation, conditioned on key input parameters (this conditioning pattern is sketched in code after this list).
arXiv Detail & Related papers (2024-10-25T07:01:48Z)
- Physics-integrated generative modeling using attentive planar normalizing flow based variational autoencoder [0.0]
We aim to improve the fidelity of reconstruction and the robustness to noise of the physics-integrated generative model.
To improve the robustness of the generative model against noise injected into the model, we propose a modification to the encoder part of the normalizing flow based VAE.
arXiv Detail & Related papers (2024-04-18T15:38:14Z)
- Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer learning leaderboard of the Weather4cast'23 competition.
arXiv Detail & Related papers (2023-11-30T08:22:08Z)
- Generative Modeling with Phase Stochastic Bridges [49.4474628881673]
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time by up to a factor of 18 compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model can achieve comparable performance while using far fewer trainable parameters and achieving high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
- Data-driven Full-waveform Inversion Surrogate using Conditional Generative Adversarial Networks [0.0]
Full-waveform inversion (FWI) velocity modeling is an advanced iterative technique that provides an accurate and detailed velocity field model.
In this study, we propose a method of generating velocity field models, as detailed as those obtained through FWI, using a conditional generative adversarial network (cGAN) with multiple inputs.
arXiv Detail & Related papers (2021-04-30T21:41:24Z)
- Incorporating Kinematic Wave Theory into a Deep Learning Method for High-Resolution Traffic Speed Estimation [3.0969191504482243]
We propose a kinematic wave based Deep Convolutional Neural Network (Deep CNN) to estimate high resolution traffic speed dynamics from sparse probe vehicle trajectories.
We introduce two key approaches that allow us to incorporate kinematic wave theory principles to improve the robustness of existing learning-based estimation methods.
arXiv Detail & Related papers (2021-02-04T21:51:25Z)
- A Spatio-temporal Transformer for 3D Human Motion Prediction [39.31212055504893]
We propose a Transformer-based architecture for the task of generative modelling of 3D human motion.
We empirically show that this effectively learns the underlying motion dynamics and reduces the error accumulation over time observed in auto-regressive models.
arXiv Detail & Related papers (2020-04-18T19:49:28Z)
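For context on the "High Resolution Seismic Waveform Generation using Denoising Diffusion" entry above, the following is a minimal sketch of the conditional latent-diffusion pattern it describes: a denoiser is trained to predict the noise added to an autoencoder latent, conditioned on key input parameters. All names, shapes, the noise schedule, and the simple MLP denoiser are assumptions for illustration, not that paper's implementation.

```python
# Illustrative conditional latent-diffusion training step (DDPM-style).
# The autoencoder that maps spectrograms to latents is assumed to be
# pretrained; here its latents are replaced by random placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, cond_dim, steps = 64, 3, 1000           # cond: e.g. magnitude, distance, depth
betas = torch.linspace(1e-4, 0.02, steps)            # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, latent_dim))

    def forward(self, z_t, t, cond):
        # Concatenate the noisy latent, a normalized timestep, and the
        # conditioning parameters, then predict the added noise.
        t_feat = t.float().unsqueeze(-1) / steps
        return self.net(torch.cat([z_t, t_feat, cond], dim=-1))

denoiser = Denoiser()
z0 = torch.randn(8, latent_dim)       # stand-in for autoencoder latents
cond = torch.randn(8, cond_dim)       # stand-in for source parameters
t = torch.randint(0, steps, (8,))
noise = torch.randn_like(z0)
a = alpha_bar[t].unsqueeze(-1)
z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise        # forward (noising) process
loss = F.mse_loss(denoiser(z_t, t, cond), noise)    # epsilon-prediction objective
loss.backward()
```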
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.