Diffusion priors enhanced velocity model building from time-lag images using a neural operator
- URL: http://arxiv.org/abs/2512.23375v1
- Date: Mon, 29 Dec 2025 11:12:26 GMT
- Title: Diffusion priors enhanced velocity model building from time-lag images using a neural operator
- Authors: Xiao Ma, Mohammad Hasyim Taufik, Tariq Alkhalifah
- Abstract summary: We propose a novel framework that combines generative models with neural operators to obtain high-resolution velocity models efficiently. Both synthetic and field data experiments demonstrate the effectiveness of the proposed generative neural operator based velocity model building approach.
- Score: 6.998175750408805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Velocity model building is a crucial component of high-precision subsurface imaging. However, conventional velocity model building methods are often computationally expensive and time-consuming. In recent years, with the rapid advancement of deep learning, particularly the success of generative models and neural operators, deep-learning-based approaches that integrate data and their statistics have attracted increasing attention as a way to address the limitations of traditional methods. In this study, we propose a novel framework that combines generative models with neural operators to obtain high-resolution velocity models efficiently. Within this workflow, the neural operator serves as a forward mapping that rapidly generates time-lag reverse time migration (RTM) extended images from the true and migration velocity models; in other words, it acts as a surrogate for modeling with the true velocity followed by migration with the migration velocity. The trained neural operator is then employed, through automatic differentiation, to gradually update the migration velocity placed in the true-velocity input channel with high-resolution components, so that the network output matches the time-lag images of the observed data obtained using the migration velocity. By embedding a generative model as a regularizer, trained on a high-resolution velocity model distribution that corresponds to the true velocity model distribution used to train the neural operator, the resulting predictions are cleaner and carry higher-resolution information. Both synthetic and field data experiments demonstrate the effectiveness of the proposed generative neural-operator-based velocity model building approach.
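The inversion loop the abstract describes can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the trained neural operator is replaced by a fixed linear surrogate `G`, the diffusion prior by a simple Tikhonov-style penalty, and the autodiff gradient by its analytic counterpart; all names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                         # toy number of velocity parameters
G = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for the trained operator
v_true = rng.standard_normal(n)                # "true" high-resolution model (toy)
d_obs = G @ v_true                             # observed time-lag image data (toy)

v = np.zeros(n)                                # initial migration velocity estimate
lr, lam = 0.1, 1e-2                            # step size, regularization weight

def misfit(v):
    """Data misfit plus a simple regularizer standing in for the diffusion prior."""
    r = G @ v - d_obs
    return 0.5 * r @ r + 0.5 * lam * v @ v

history = []
for _ in range(200):
    # With a neural network surrogate, this gradient would come from
    # automatic differentiation through the trained operator instead.
    grad = G.T @ (G @ v - d_obs) + lam * v
    v -= lr * grad
    history.append(misfit(v))
```

In the paper's setting, each iteration adds high-resolution components to the velocity placed in the true-velocity input channel until the predicted time-lag images match the observed ones.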
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
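The underdamped Langevin dynamics referenced above can be simulated with a basic Euler-Maruyama scheme. This is an illustrative sketch only: the quadratic potential and the constants are stand-ins, not the paper's learned components.

```python
import numpy as np

# Underdamped Langevin equation (illustrative form):
#   dx = v dt
#   dv = (-gamma * v - grad_U(x)) dt + sigma dW
rng = np.random.default_rng(1)
gamma, sigma, dt = 0.5, 0.3, 0.01   # damping, noise scale, time step (toy values)

def grad_U(x):
    # Quadratic potential U(x) = ||x||^2 / 2; LangevinFlow learns U instead.
    return x

x, v = np.ones(3), np.zeros(3)      # 3-dimensional latent state (toy)
traj = [x.copy()]
for _ in range(1000):
    x = x + v * dt
    v = v + (-gamma * v - grad_U(x)) * dt + sigma * np.sqrt(dt) * rng.standard_normal(3)
    traj.append(x.copy())
traj = np.array(traj)
```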
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - DiffPINN: Generative diffusion-initialized physics-informed neural networks for accelerating seismic wavefield representation [3.069335774032178]
Physics-informed neural networks (PINNs) offer a powerful framework for seismic wavefield modeling. PINNs typically require time-consuming retraining when applied to different velocity models. We introduce a latent diffusion-based strategy for rapid and effective PINN initialization.
arXiv Detail & Related papers (2025-05-31T08:41:06Z) - Mean Flows for One-step Generative Modeling [64.4997821467102]
We propose a principled and effective framework for one-step generative modeling. A well-defined identity between average and instantaneous velocities is derived and used to guide neural network training. Our method, termed the MeanFlow model, is self-contained and requires no pre-training, distillation, or curriculum learning.
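For context, the velocity identity mentioned in the summary can be written as follows (hedged reconstruction from the MeanFlow paper; notation may differ from the original):

```latex
% Average velocity over the interval [r, t]:
u(z_t, r, t) = \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\, d\tau
% MeanFlow identity relating average and instantaneous velocities
% (d/dt denotes the total derivative along the trajectory):
u(z_t, r, t) = v(z_t, t) - (t - r)\, \frac{d}{dt}\, u(z_t, r, t)
```

Training regresses a network onto the right-hand side, so sampling requires only a single evaluation of the learned average velocity.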
arXiv Detail & Related papers (2025-05-19T17:59:42Z) - Implicit factorized transformer approach to fast prediction of turbulent channel flows [6.70175842351963]
We introduce a modified implicit factorized transformer (IFactFormer-m) model which replaces the original chained factorized attention with parallel factorized attention. The IFactFormer-m model successfully performs long-term predictions for turbulent channel flow.
arXiv Detail & Related papers (2024-12-25T09:05:14Z) - Machine learning-enabled velocity model building with uncertainty quantification [0.41942958779358674]
Accurately characterizing migration velocity models is crucial for a wide range of geophysical applications.
Traditional velocity model building methods are powerful but often struggle with the inherent complexities of the inverse problem.
We propose a scalable methodology that integrates generative modeling, in the form of Diffusion networks, with physics-informed summary statistics.
arXiv Detail & Related papers (2024-11-11T01:36:48Z) - Propagating the prior from shallow to deep with a pre-trained velocity-model Generative Transformer network [2.499907423888049]
Building subsurface velocity models is essential to our goals in utilizing seismic data for exploration and monitoring.
We introduce VelocityGPT, a novel implementation that utilizes Transformer decoders trained autoregressively to generate a velocity model from shallow subsurface to deep.
We demonstrate the effectiveness of VelocityGPT as a promising approach in generative model applications for seismic velocity model building.
arXiv Detail & Related papers (2024-08-19T07:56:43Z) - A-SDM: Accelerating Stable Diffusion through Redundancy Removal and Performance Optimization [54.113083217869516]
In this work, we first explore the computational redundancy part of the network.
We then prune the redundancy blocks of the model and maintain the network performance.
Thirdly, we propose a global-regional interactive (GRI) attention to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z) - Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z) - An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Seismic wave propagation and inversion with Neural Operators [7.296366040398878]
We develop a prototype framework for learning general solutions using a recently developed machine learning paradigm called Neural Operator.
A trained Neural Operator can compute a solution in negligible time for any velocity structure or source location.
We illustrate the method with the 2D acoustic wave equation and demonstrate the method's applicability to seismic tomography.
arXiv Detail & Related papers (2021-08-11T19:17:39Z) - STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model can achieve comparable performance while utilizing much less trainable parameters and achieve high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.