Seismic Data Interpolation via Denoising Diffusion Implicit Models with Coherence-corrected Resampling
- URL: http://arxiv.org/abs/2307.04226v3
- Date: Fri, 6 Sep 2024 07:10:03 GMT
- Title: Seismic Data Interpolation via Denoising Diffusion Implicit Models with Coherence-corrected Resampling
- Authors: Xiaoli Wei, Chunxia Zhang, Hongtao Wang, Chengli Tan, Deng Xiong, Baisong Jiang, Jiangshe Zhang, Sang-Woon Kim
- Abstract summary: Deep learning models such as U-Net often underperform when the training and test missing patterns do not match.
We propose a novel framework built upon multi-modal adaptable diffusion models.
In the inference phase, we introduce the denoising diffusion implicit model to reduce the number of sampling steps.
To enhance the coherence and continuity between the revealed traces and the missing traces, we propose two strategies.
- Score: 7.755439545030289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate interpolation of seismic data is crucial for improving the quality of imaging and interpretation. In recent years, deep learning models such as U-Net and generative adversarial networks have been widely applied to seismic data interpolation. However, they often underperform when the training and test missing patterns do not match. To alleviate this issue, here we propose a novel framework built upon multi-modal adaptable diffusion models. In the training phase, following common practice, we use the denoising diffusion probabilistic model with a cosine noise schedule. This cosine schedule makes better use of the seismic data by reducing the proportion of excessively noisy stages. In the inference phase, we introduce the denoising diffusion implicit model to reduce the number of sampling steps. Unlike conventional unconditional generation, we incorporate the known trace information into each reverse sampling step to achieve conditional interpolation. To enhance the coherence and continuity between the revealed traces and the missing traces, we further propose two strategies: successive coherence correction and resampling. Coherence correction penalizes mismatches in the revealed traces, while resampling conducts cyclic interpolation between adjacent reverse steps. Extensive experiments on synthetic and field seismic data validate our model's superiority and demonstrate its generalization capability to various missing patterns and different noise levels with just one training session. Uncertainty quantification and ablation studies are also presented.
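To make the inference procedure concrete, below is a minimal sketch of conditional DDIM sampling with known-trace injection and cyclic resampling between adjacent reverse steps, in the spirit of the abstract. The noise predictor `eps_model`, the cosine-schedule helper, and all shapes are illustrative assumptions rather than the authors' released implementation, and the coherence-correction penalty is only indicated in a comment.

```python
import numpy as np

# Sketch only. Assumptions: `eps_model(x, t)` is a pretrained noise predictor,
# `x_known` is the observed seismic section, and `mask` is 1 on revealed traces
# and 0 on missing ones. Hypothetical names; not the authors' implementation.

def cosine_alpha_bar(T=1000, s=0.008):
    """Cumulative cosine schedule \\bar{alpha}_t (Nichol & Dhariwal style)."""
    t = np.linspace(0, T, T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]

def ddim_step(x_t, t, t_prev, eps_model, abar):
    """One deterministic DDIM reverse step (eta = 0)."""
    eps = eps_model(x_t, t)
    x0 = (x_t - np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(abar[t])
    return np.sqrt(abar[t_prev]) * x0 + np.sqrt(1.0 - abar[t_prev]) * eps

def conditional_interpolate(eps_model, x_known, mask, steps=50, T=1000, n_resample=2):
    abar = cosine_alpha_bar(T)
    ts = np.linspace(T - 1, 0, steps).astype(int)   # shortened DDIM trajectory
    x = np.random.randn(*x_known.shape)
    for i in range(len(ts) - 1):
        t, t_prev = ts[i], ts[i + 1]
        for r in range(n_resample):                 # cyclic resampling between steps
            x = ddim_step(x, t, t_prev, eps_model, abar)
            # Conditioning: overwrite revealed traces with the known data,
            # diffused to the matching noise level t_prev.
            z = np.random.randn(*x.shape)
            x_rev = np.sqrt(abar[t_prev]) * x_known + np.sqrt(1.0 - abar[t_prev]) * z
            x = mask * x_rev + (1.0 - mask) * x
            # (Coherence correction would additionally penalize the mismatch
            # mask * (x - x_rev) on the revealed traces; omitted in this sketch.)
            if r < n_resample - 1:                  # diffuse back from t_prev to t
                a = abar[t] / abar[t_prev]
                x = np.sqrt(a) * x + np.sqrt(1.0 - a) * np.random.randn(*x.shape)
    return x
```

In practice `eps_model` would be a trained U-Net wrapped to accept a noisy section and a timestep index; with eta = 0 the trajectory is deterministic given the injected noise, which is what makes the shortened step schedule cheap.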
Related papers
- On the Relation Between Linear Diffusion and Power Iteration [42.158089783398616]
We study the generation process as a "correlation machine".
We show that low frequencies emerge earlier in the generation process, with the denoising basis vectors becoming aligned with the true data at a rate that depends on their eigenvalues.
This model allows us to show that the linear diffusion model converges in mean to the leading eigenvector of the underlying data, similarly to the prevalent power iteration method.
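The comparison is to the classic power iteration; a minimal sketch of that baseline follows, with a toy check against numpy's eigendecomposition (matrix sizes and iteration count are arbitrary assumptions):

```python
import numpy as np

def power_iteration(C, iters=200, seed=0):
    """Classic power iteration: repeated multiplication converges to the
    leading eigenvector of C (up to sign), at a rate set by the eigengap."""
    v = np.random.default_rng(seed).standard_normal(C.shape[0])
    for _ in range(iters):
        v = C @ v
        v /= np.linalg.norm(v)          # renormalise each step
    return v

# Toy check on the covariance of synthetic data.
X = np.random.default_rng(1).standard_normal((500, 8))
C = np.cov(X, rowvar=False)
v = power_iteration(C)
leading = np.linalg.eigh(C)[1][:, -1]       # eigh sorts eigenvalues ascending
assert abs(abs(v @ leading) - 1.0) < 1e-6   # aligned up to sign
```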
arXiv Detail & Related papers (2024-10-16T07:33:12Z)
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
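As a point of reference for the first of those strategies, here is a hedged sketch of a channel-shuffling operation in the ShuffleNet sense; the group count, NCHW layout, and use of numpy rather than a deep learning framework are assumptions, and the paper's layer may differ:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups: reshape to (groups, channels-per-group),
    swap those two axes, and flatten back. Standard way to mix grouped branches."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

x = np.arange(2 * 8 * 4 * 4, dtype=np.float32).reshape(2, 8, 4, 4)
y = channel_shuffle(x, groups=2)    # channels now alternate between the two groups
```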
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- Inference Stage Denoising for Undersampled MRI Reconstruction [13.8086726938161]
Reconstruction of magnetic resonance imaging (MRI) data has benefited substantially from deep learning.
A key challenge remains: to improve generalisation to distribution shifts between the training and testing data.
arXiv Detail & Related papers (2024-02-12T12:50:10Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
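To illustrate what correlation across images within a mini-batch can look like mechanically, here is a minimal sketch that mixes i.i.d. Gaussian noise with a Cholesky factor of a chosen batch-correlation matrix; the equicorrelation structure and strength `rho` are illustrative assumptions, not the paper's blue-noise construction:

```python
import numpy as np

def batch_correlated_noise(batch, pixels, rho=0.5, seed=0):
    """Draw Gaussian noise whose batch entries are correlated: if z has i.i.d.
    columns, L @ z has per-pixel covariance L @ L.T = R across the batch."""
    rng = np.random.default_rng(seed)
    R = np.full((batch, batch), rho) + (1.0 - rho) * np.eye(batch)
    L = np.linalg.cholesky(R)                  # mixing matrix
    z = rng.standard_normal((batch, pixels))   # i.i.d. noise per image
    return L @ z

eps = batch_correlated_noise(batch=8, pixels=64 * 64)
```

Within-image correlation can be imposed the same way with a pixel-space covariance in place of the batch one.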
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS improves image fidelity and reconciles the discrepancy between training and inference, while preserving the original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z)
- DiffSED: Sound Event Detection with Denoising Diffusion [70.18051526555512]
We reformulate the SED problem by taking a generative learning perspective.
Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process.
During training, our model learns to reverse the noising process by converting noisy latent queries to their ground-truth versions.
arXiv Detail & Related papers (2023-08-14T17:29:41Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
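For readers unfamiliar with the second half of that comparison, a minimal sketch of the mean-shift (mode-seeking) update with a Gaussian kernel follows; the bandwidth and toy data are assumptions for illustration:

```python
import numpy as np

def mean_shift(x, data, h=0.5, iters=50):
    """Each update moves x to a kernel-weighted mean of the data, ascending the
    kernel density estimate toward a mode."""
    for _ in range(iters):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * h ** 2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()
    return x

data = np.random.default_rng(0).standard_normal((200, 2)) * 0.3 + np.array([2.0, -1.0])
mode = mean_shift(np.zeros(2), data)    # climbs toward the cluster near (2, -1)
```

This mode-seeking behaviour is what the paper relates to optimal ODE-based sampling.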
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- The role of noise in denoising models for anomaly detection in medical images [62.0532151156057]
Pathological brain lesions exhibit diverse appearances in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
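A hedged sketch of those two knobs: the noise's spatial resolution can be controlled by drawing Gaussian noise on a coarse grid and upsampling it, and its magnitude by a scale factor. The nearest-neighbour upsampling and specific sizes below are my assumptions, not the paper's exact corruption:

```python
import numpy as np

def coarse_noise(shape, coarse=16, sigma=0.2, seed=0):
    """Low-spatial-resolution Gaussian noise: draw on a coarse x coarse grid,
    nearest-neighbour upsample to `shape`, and scale by sigma (magnitude knob).
    Assumes the target sides are divisible by `coarse`."""
    rng = np.random.default_rng(seed)
    h, w = shape
    z = rng.standard_normal((coarse, coarse))
    z = np.repeat(np.repeat(z, h // coarse, axis=0), w // coarse, axis=1)
    return sigma * z

noisy = np.zeros((128, 128)) + coarse_noise((128, 128))   # image + coarse noise
```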
arXiv Detail & Related papers (2023-01-19T21:39:38Z)
- Empowering Diffusion Models on the Embedding Space for Text Generation [38.664533078347304]
We study the optimization challenges encountered with both the embedding space and the denoising model.
Because the embedding space is learned jointly with the model, the data distribution itself is trainable, which may lead to the collapse of the embedding space and unstable training.
Based on the above analysis, we propose Difformer, an embedding diffusion model based on Transformer.
arXiv Detail & Related papers (2022-12-19T12:44:25Z)
- From Denoising Diffusions to Denoising Markov Models [38.33676858989955]
Denoising diffusions are state-of-the-art generative models exhibiting remarkable empirical performance.
We propose a unifying framework generalising this approach to a wide class of spaces and leading to an original extension of score matching.
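For context, here is a minimal sketch of the denoising score matching objective that underlies such diffusions and that a generalised score matching would extend; `score_model` is an assumed callable, and the single fixed noise level is a simplification:

```python
import numpy as np

def dsm_loss(score_model, x0, sigma, rng):
    """Denoising score matching: perturb data with Gaussian noise and regress
    the score of the perturbation kernel N(x; x0, sigma^2 I), which at the
    perturbed point equals -(x - x0) / sigma^2 = -noise / sigma."""
    noise = rng.standard_normal(x0.shape)
    x = x0 + sigma * noise
    target = -noise / sigma
    return np.mean((score_model(x) - target) ** 2)
```

Minimising this loss fits the model to the score of the noise-perturbed data distribution, which is what reverse-time sampling then follows.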
arXiv Detail & Related papers (2022-11-07T14:34:27Z)