Anti-Aliasing Add-On for Deep Prior Seismic Data Interpolation
- URL: http://arxiv.org/abs/2101.11361v1
- Date: Wed, 27 Jan 2021 12:46:58 GMT
- Title: Anti-Aliasing Add-On for Deep Prior Seismic Data Interpolation
- Authors: Francesco Picetti, Vincenzo Lipari, Paolo Bestagini, Stefano Tubaro
- Abstract summary: We propose to improve Deep Prior inversion by adding a directional Laplacian as a regularization term to the problem.
We show that our results are less prone to aliasing, even in the presence of noisy and corrupted data.
- Score: 20.336981948463702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data interpolation is a fundamental step in any seismic processing workflow.
Among machine learning techniques recently proposed to solve data interpolation
as an inverse problem, the Deep Prior paradigm aims at employing a
convolutional neural network to capture priors on the data in order to
regularize the inversion. However, this technique lacks reconstruction
precision when interpolating highly decimated data due to the presence of
aliasing. In this work, we propose to improve Deep Prior inversion by adding a
directional Laplacian as a regularization term to the problem. This
regularizer drives the optimization towards solutions that honor the slopes
estimated from the low frequencies of the interpolated data. We provide
numerical examples to showcase the methodology devised in this manuscript,
showing that our results are less prone to aliasing, even in the presence of
noisy and corrupted data.
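To make the idea concrete, here is a minimal sketch of such a regularized Deep Prior objective: a masked data-misfit term on the observed traces plus a slope-consistency penalty. The network, the mask layout, the use of a first-order directional derivative as a simplified stand-in for the paper's directional Laplacian, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch

def directional_derivative(panel, slope):
    """First-order directional derivative (d/dx + slope * d/dt), a simplified
    stand-in for the directional Laplacian; plain finite differences."""
    # panel, slope: (batch, 1, nt, nx); slope = local dip, which per the
    # abstract would be estimated from the low frequencies of a preliminary
    # interpolation
    dt = panel[..., 1:, :-1] - panel[..., :-1, :-1]  # difference along time
    dx = panel[..., :-1, 1:] - panel[..., :-1, :-1]  # difference along traces
    return dx + slope[..., :-1, :-1] * dt

def deep_prior_loss(net, z, observed, mask, slope, lam=0.1):
    """Masked data misfit plus slope-consistency regularizer (hypothetical)."""
    recon = net(z)  # CNN output is the candidate interpolated gather
    misfit = ((mask * (recon - observed)) ** 2).mean()  # fit known traces only
    reg = (directional_derivative(recon, slope) ** 2).mean()
    return misfit + lam * reg
```

As in any Deep Prior scheme, the unknowns are the network weights rather than the image itself, so one would minimize this loss over net.parameters() with a fixed random input z.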
Related papers
- Deep learning-based shot-domain seismic deblending [1.6411821807321063]
We make use of unblended shot gathers acquired at the end of each sail line.
By manually blending these data we obtain training data with good control of the ground truth.
We train a deep neural network using multi-channel inputs that include adjacent blended shot gathers (a toy blending sketch follows this entry).
arXiv Detail & Related papers (2024-09-13T07:32:31Z)
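A toy version of the blending step described above might look as follows; the delay model, array shapes, and function name are assumptions for illustration only.

```python
import numpy as np

def blend_shots(shots, max_delay, seed=None):
    """Sum clean shot gathers with random firing-time delays to synthesize a
    blended record; the clean shots then serve as ground-truth labels."""
    rng = np.random.default_rng(seed)
    nt, nx = shots[0].shape                      # samples x traces per shot
    delays = rng.integers(0, max_delay, size=len(shots))
    blended = np.zeros((nt + max_delay, nx))
    for shot, delay in zip(shots, delays):
        blended[delay:delay + nt] += shot        # shots overlap in time
    return blended, delays
```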
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [51.44054828384487]
We propose a novel parameterization method dubbed Hierarchical Generative Latent Distillation (H-GLaD).
This method systematically explores hierarchical layers within generative adversarial networks (GANs).
In addition, we introduce a novel class-relevant feature distance metric to alleviate the computational burden associated with synthetic dataset evaluation.
arXiv Detail & Related papers (2024-06-09T09:15:54Z)
- Single-Shot Plug-and-Play Methods for Inverse Problems [24.48841512811108]
Plug-and-Play priors in inverse problems have become increasingly prominent in recent years.
Existing models predominantly rely on pre-trained denoisers using large datasets.
In this work, we introduce Single-Shot perturbative methods, shifting the focus to solving inverse problems with minimal data (a generic plug-and-play iteration is sketched after this entry).
arXiv Detail & Related papers (2023-11-22T20:31:33Z)
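For context, the standard plug-and-play template that such methods build on replaces the proximal step of a regularized solver with a denoiser. The sketch below shows plain PnP proximal gradient descent, not the single-shot variant itself, and every name in it is illustrative.

```python
import numpy as np

def pnp_pgd(A, y, denoiser, step, n_iter=100):
    """Plug-and-play proximal gradient descent: a gradient step on the
    data-fidelity term, then a denoiser standing in for the prior's prox."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)       # gradient of 0.5 * ||A x - y||^2
        x = denoiser(x - step * grad)  # denoiser acts as implicit regularizer
    return x
```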
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
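One common concrete form of such interpolation-based stabilization is a lookahead-style update: run a few steps of any base optimizer, then pull the weights part of the way back toward an anchor. This is a generic template under that assumption, not the paper's exact algorithm; the function names are invented.

```python
import torch

def interpolated_step(params, inner_step, k=5, alpha=0.5):
    """Take k base-optimizer steps, then linearly interpolate between the
    starting (anchor) weights and the resulting fast weights."""
    anchor = [p.detach().clone() for p in params]
    for _ in range(k):
        inner_step()                         # any base optimizer update
    with torch.no_grad():
        for p, a in zip(params, anchor):
            p.copy_(a + alpha * (p - a))     # convex combination of iterates
```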
- Refining Amortized Posterior Approximations using Gradient-Based Summary Statistics [0.9176056742068814]
We present an iterative framework to improve the amortized approximations of posterior distributions in the context of inverse problems.
We validate our method in a controlled setting by applying it to a stylized problem, and observe improved posterior approximations with each iteration.
arXiv Detail & Related papers (2023-05-15T15:47:19Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
In particular, we identify a statistical-to-computational gap: known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that weights trained on synthetic data are robust against accumulated-error perturbations when the optimization is regularized towards a flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Shuffled linear regression through graduated convex relaxation [12.614901374282868]
The shuffled linear regression problem aims to recover linear relationships in datasets where the correspondence between input and output is unknown.
This problem arises in a wide range of applications including survey data.
We propose a novel optimization algorithm for shuffled linear regression based on a posterior-maximizing objective function (a naive alternating baseline is sketched after this entry).
arXiv Detail & Related papers (2022-09-30T17:33:48Z)
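To illustrate the problem setup, here is a naive alternating baseline: re-match responses to rows by ranking current predictions, then re-fit by least squares. This is a simple heuristic that works only in low-noise instances, explicitly not the paper's graduated convex relaxation; all names are illustrative.

```python
import numpy as np

def shuffled_ls(X, y, n_iter=50, seed=0):
    """Alternate between (1) matching shuffled responses to rows by ranking
    current predictions and (2) ordinary least-squares refitting."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])          # random initial weights
    y_sorted = np.sort(y)
    for _ in range(n_iter):
        order = np.argsort(X @ w)            # rows ranked by predicted value
        y_matched = np.empty_like(y)
        y_matched[order] = y_sorted          # smallest y to smallest prediction
        w, *_ = np.linalg.lstsq(X, y_matched, rcond=None)
    return w
```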
- Deep Preconditioners and their application to seismic wavefield processing [0.0]
Sparsity-promoting inversion, coupled with fixed-basis sparsifying transforms, represents the go-to approach for many processing tasks.
We propose to train an AutoEncoder network to learn a direct mapping between the input seismic data and a representative latent manifold.
The trained decoder is subsequently used as a nonlinear preconditioner for the physics-driven inverse problem at hand (see the sketch after this entry).
arXiv Detail & Related papers (2022-07-20T14:25:32Z)
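The preconditioning idea above can be sketched as optimizing the latent code of a frozen decoder so that the modeled data match the observations; the forward operator, latent dimension, and optimizer settings below are assumptions, not the paper's code.

```python
import torch

def latent_inversion(decoder, forward_op, data, latent_dim,
                     n_iter=200, lr=1e-2):
    """Solve the inverse problem in the decoder's latent space: the decoder
    constrains solutions to the learned manifold (nonlinear preconditioning)."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        residual = forward_op(decoder(z)) - data   # physics-driven misfit
        loss = (residual ** 2).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()                     # preconditioned estimate
```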
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.