A prior regularized full waveform inversion using generative diffusion models
- URL: http://arxiv.org/abs/2306.12776v1
- Date: Thu, 22 Jun 2023 10:10:34 GMT
- Title: A prior regularized full waveform inversion using generative diffusion models
- Authors: Fu Wang, Xinquan Huang, Tariq Alkhalifah
- Abstract summary: Full waveform inversion (FWI) has the potential to provide high-resolution subsurface model estimations.
Due to limitations in observation, e.g., regional noise, limited shots or receivers, and band-limited data, it is hard to obtain the desired high-resolution model with FWI.
We propose a new paradigm for FWI regularized by generative diffusion models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Full waveform inversion (FWI) has the potential to provide high-resolution
subsurface model estimations. However, due to limitations in observation, e.g.,
regional noise, limited shots or receivers, and band-limited data, it is hard
to obtain the desired high-resolution model with FWI. To address this
challenge, we propose a new paradigm for FWI regularized by generative
diffusion models. Specifically, we pre-train a diffusion model in a fully
unsupervised manner on a prior velocity model distribution that represents our
expectations of the subsurface and then adapt it to the seismic observations by
incorporating the FWI into the sampling process of the generative diffusion
models. What makes diffusion models uniquely appropriate for such an
implementation is that the generative process retains the form and dimensions
of the velocity model. Numerical examples demonstrate that our method can
outperform the conventional FWI with only negligible additional computational
cost. Even in cases of very sparse observations or observations with strong
noise, the proposed method could still reconstruct a high-quality subsurface
model. Thus, we can incorporate our prior expectations of the solutions in an
efficient manner. We further test this approach on field data, which
demonstrates the effectiveness of the proposed method.
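The recipe in the abstract, a pre-trained diffusion prior whose reverse sampling is steered by an FWI data-misfit gradient at each step, can be illustrated with a toy sketch. Everything below is an assumption-laden stand-in: `score_model` replaces the paper's pre-trained score network, `fwi_misfit_gradient` replaces the adjoint-state FWI gradient, and all constants (velocities, step sizes, weights) are illustrative; only the overall structure of the update follows the abstract.

```python
import numpy as np

def score_model(x, t):
    # Placeholder score network: weakly pulls samples toward a smooth
    # background velocity (assumed 2500 m/s), standing in for the
    # pre-trained diffusion prior.
    return (np.full_like(x, 2500.0) - x) / 1e6

def fwi_misfit_gradient(x, observed):
    # Placeholder for the adjoint-state FWI gradient; a least-squares
    # residual against "observed" stands in for dJ/d(velocity).
    return x - observed

def regularized_sampling(observed, n_steps=50, step=0.5, fwi_weight=0.1, seed=0):
    """Reverse-diffusion (annealed Langevin-style) loop with an FWI
    gradient term injected at each step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(2500.0, 500.0, size=observed.shape)  # start from noise
    for i in range(n_steps):
        t = 1.0 - i / n_steps  # annealing variable, 1 -> 0
        noise = rng.normal(0.0, 1.0, size=x.shape)
        x = (x
             + step * score_model(x, t)                              # prior term
             - step * fwi_weight * fwi_misfit_gradient(x, observed)  # data term
             + np.sqrt(step) * t * noise)                            # annealed noise
    return x

true_model = np.linspace(2000.0, 3000.0, 64)  # toy 1-D velocity profile
recovered = regularized_sampling(true_model)
```

The key point the paper makes, that the generative process retains the form and dimensions of the velocity model, is what makes this per-step coupling possible: the sample `x` is itself a velocity model, so the FWI gradient applies to it directly.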
Related papers
- Learning Diffusion Priors from Observations by Expectation Maximization
We present a novel expectation-maximization algorithm for training diffusion models from incomplete and noisy observations only.
As part of our method, we propose and motivate a new posterior sampling scheme for unconditional diffusion models.
arXiv Detail & Related papers (2024-05-22T15:04:06Z)
- Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling
We introduce a novel framework that enhances diffusion models by supporting a broader range of forward processes.
We also propose a novel parameterization technique for learning the forward process.
Results underscore NFDM's versatility and its potential for a wide range of applications.
arXiv Detail & Related papers (2024-04-19T15:10:54Z)
- Boosting Diffusion Models with Moving Average Sampling in Frequency Domain
Diffusion models rely on the current sample to denoise the next one, possibly resulting in denoising instability.
In this paper, we reinterpret the iterative denoising process as model optimization and leverage a moving average mechanism to ensemble all the prior samples.
We name the complete approach "Moving Average Sampling in Frequency domain" (MASF).
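The ensembling idea can be sketched in a few lines: keep an exponential moving average of the successive samples, with the transform to the frequency domain done by a plain FFT. This is a generic illustration, not MASF's actual design; the decay constant and the toy trajectory below are assumptions.

```python
import numpy as np

def moving_average_ensemble(trajectory, decay=0.9):
    """Exponential moving average of successive samples, accumulated in
    the frequency domain (FFT): a toy version of ensembling all prior
    samples along a denoising trajectory."""
    ema = np.fft.fft(trajectory[0])
    for x in trajectory[1:]:
        ema = decay * ema + (1.0 - decay) * np.fft.fft(x)
    return np.real(np.fft.ifft(ema))

# Toy trajectory: a clean signal plus noise that shrinks at each step,
# mimicking samples that become progressively less noisy.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 128)
signal = np.sin(t)
trajectory = [signal + (1.0 - i / 50.0) * rng.normal(0.0, 1.0, t.size)
              for i in range(50)]
out = moving_average_ensemble(trajectory)
```

Because the moving average weights recent (cleaner) samples most heavily while still retaining information from earlier ones, the ensembled output is steadier than any single early sample, which is the instability the paper targets.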
arXiv Detail & Related papers (2024-03-26T16:57:55Z)
- MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process
We introduce a novel Multi-Granularity Time Series (MG-TSD) model, which achieves state-of-the-art predictive performance.
Our approach does not rely on additional external data, making it versatile and applicable across various domains.
arXiv Detail & Related papers (2024-03-09T01:15:03Z)
- Generative Modeling with Phase Stochastic Bridges
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z)
- Stage-by-stage Wavelet Optimization Refinement Diffusion Model for Sparse-View CT Reconstruction
We present an innovative approach named the Stage-by-stage Wavelet Optimization Refinement Diffusion (SWORD) model for sparse-view CT reconstruction.
Specifically, we establish a unified mathematical model integrating low-frequency and high-frequency generative models, and obtain the solution via an optimization procedure.
Our method is rooted in established optimization theory and comprises three distinct stages: low-frequency generation, high-frequency refinement, and domain transform.
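The low-frequency/high-frequency split that the staged procedure operates on can be illustrated with a one-level Haar wavelet transform. This is a generic stand-in, not SWORD's actual wavelet or stages:

```python
import numpy as np

def haar_split(x):
    """One-level Haar wavelet transform of an even-length 1-D signal into
    a low-frequency approximation band and a high-frequency detail band."""
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high

def haar_merge(low, high):
    """Inverse transform; the Haar pair gives perfect reconstruction."""
    x = np.empty(low.size * 2)
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

x = np.arange(8.0)
low, high = haar_split(x)
rec = haar_merge(low, high)
```

Perfect reconstruction is what lets a staged method generate the two bands with separate models and still recombine them exactly via the domain transform.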
arXiv Detail & Related papers (2023-08-30T10:48:53Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs)
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- Diffusion Models are Minimax Optimal Distribution Estimators
We provide the first rigorous analysis on approximation and generalization abilities of diffusion modeling.
We show that when the true density function belongs to the Besov space and the empirical score matching loss is properly minimized, the generated data distribution achieves the nearly minimax optimal estimation rates.
arXiv Detail & Related papers (2023-03-03T11:31:55Z)
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
- Fast Inference in Denoising Diffusion Models via MMD Finetuning
We present MMD-DDM, a novel method for fast sampling of diffusion models.
Our approach is based on the idea of using the Maximum Mean Discrepancy (MMD) to finetune the learned distribution with a given budget of timesteps.
Our findings show that the proposed method is able to produce high-quality samples in a fraction of the time required by widely-used diffusion models.
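The quantity being minimized during that finetuning, the Maximum Mean Discrepancy between generated and target samples, has a standard sample estimate. The sketch below is the textbook biased (V-statistic) estimator with an RBF kernel, not MMD-DDM's actual training code; the bandwidth and sample sizes are illustrative.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # RBF kernel matrix between two sample sets (rows are samples).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    sample sets x and y; small when the two distributions match."""
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (256, 2)), rng.normal(0, 1, (256, 2)))
diff = mmd2(rng.normal(0, 1, (256, 2)), rng.normal(3, 1, (256, 2)))
```

Because the estimator is differentiable in the generated samples, it can serve as a finetuning loss over a fixed small budget of timesteps, which is the core of the approach described above.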
arXiv Detail & Related papers (2023-01-19T09:48:07Z)
- A Survey on Generative Diffusion Model
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.