PINNslope: seismic data interpolation and local slope estimation with physics informed neural networks
- URL: http://arxiv.org/abs/2305.15990v2
- Date: Sat, 9 Dec 2023 12:13:09 GMT
- Title: PINNslope: seismic data interpolation and local slope estimation with physics informed neural networks
- Authors: Francesco Brandolin, Matteo Ravasi and Tariq Alkhalifah
- Abstract summary: Interpolation of aliased seismic data is a key step in a seismic processing workflow.
We propose to interpolate seismic data by utilizing a physics informed neural network (PINN).
- Score: 2.3895981099137535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpolation of aliased seismic data constitutes a key step in a seismic
processing workflow to obtain high quality velocity models and seismic images.
Building on the idea of describing seismic wavefields as a superposition of
local plane waves, we propose to interpolate seismic data by utilizing a
physics informed neural network (PINN). In the proposed framework, two
feed-forward neural networks are jointly trained using the local plane wave
differential equation as well as the available data as two terms in the
objective function: a primary network assisted by positional encoding is tasked
with reconstructing the seismic data, whilst an auxiliary, smaller network
estimates the associated local slopes. Results on synthetic and field data
validate the effectiveness of the proposed method in handling aliased (coarsely
sampled) data and data with large gaps. Our method compares favorably against a
classic least-squares inversion approach regularized by the local plane-wave
equation as well as a PINN-based approach with a single network and
pre-computed local slopes. We find that introducing a second network to
estimate the local slopes while simultaneously interpolating the aliased
data enhances the overall reconstruction capabilities and convergence behavior
of the primary network. Moreover, an additional positional encoding layer
embedded as the first layer of the wavefield network enables the network to
converge faster, improving the accuracy of the data term.
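For reference, the local plane-wave equation used as the physics term is commonly written as u_x + sigma(x, t) * u_t = 0, where u is the wavefield and sigma the local slope. The following is a minimal PyTorch sketch of the two-network setup described in the abstract, not the authors' implementation; the network sizes, the number of encoding frequencies (N_FREQS), the PDE weight (LAMBDA_PDE), and the toy data are all illustrative assumptions.
```python
import torch
import torch.nn as nn

N_FREQS = 8                 # positional-encoding frequencies (assumed)
LAMBDA_PDE = 1.0            # weight of the plane-wave PDE term (assumed)
ENC_DIM = 2 + 4 * N_FREQS   # (x, t) plus a sin/cos pair per frequency and coordinate

def positional_encoding(xt):
    """Encode (x, t) as [x, t, sin(2^k pi x), cos(2^k pi x), ...]."""
    feats = [xt]
    for k in range(N_FREQS):
        feats += [torch.sin(2.0 ** k * torch.pi * xt),
                  torch.cos(2.0 ** k * torch.pi * xt)]
    return torch.cat(feats, dim=-1)

def mlp(in_dim, hidden, out_dim, n_hidden):
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, hidden), nn.Tanh()]
        d = hidden
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

u_net = mlp(ENC_DIM, 128, 1, 5)   # primary wavefield network (encoded input)
s_net = mlp(2, 32, 1, 3)          # smaller auxiliary slope network (raw input)

opt = torch.optim.Adam(
    list(u_net.parameters()) + list(s_net.parameters()), lr=1e-3)

# Toy stand-ins for the observed traces.
xt_data = torch.rand(256, 2)   # (x, t) coordinates of the available samples
d_obs = torch.rand(256, 1)     # observed amplitudes (placeholder)

for step in range(1000):
    opt.zero_grad()
    # Data term: fit the available (possibly coarsely sampled) traces.
    loss_data = ((u_net(positional_encoding(xt_data)) - d_obs) ** 2).mean()
    # Physics term: local plane-wave residual u_x + sigma * u_t at
    # randomly drawn collocation points.
    xt_coll = torch.rand(1024, 2, requires_grad=True)
    u = u_net(positional_encoding(xt_coll))
    grads = torch.autograd.grad(u.sum(), xt_coll, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    sigma = s_net(xt_coll)
    loss_pde = ((u_x + sigma * u_t) ** 2).mean()
    (loss_data + LAMBDA_PDE * loss_pde).backward()
    opt.step()
```
Note that the slope network's output enters the objective only through the PDE residual, so it is trained entirely by the physics term, while the primary network is driven jointly by the data and physics terms.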
Related papers
- A Subsampling Based Neural Network for Spatial Data [0.0]
This article proposes a consistent localized two-layer deep neural network-based regression for spatial data.
We empirically observe the rate of convergence of discrepancy measures between the empirical probability distributions of observed and predicted data, which becomes faster for a less smooth spatial surface.
This application is an effective showcase of non-linear spatial regression.
arXiv Detail & Related papers (2024-11-06T02:37:43Z)
- A convolutional neural network approach to deblending seismic data [1.5488464287814563]
We present a data-driven deep learning-based method for fast and efficient seismic deblending.
A convolutional neural network (CNN) is designed to match the specific characteristics of seismic data.
After training and validation of the network, seismic deblending can be performed in near real time.
arXiv Detail & Related papers (2024-09-12T10:54:35Z)
- Mesh Denoising Transformer [104.5404564075393]
Mesh denoising is aimed at removing noise from input meshes while preserving their feature structures.
SurfaceFormer is a pioneering Transformer-based mesh denoising framework.
A new representation known as the Local Surface Descriptor captures local geometric intricacies.
A Denoising Transformer module receives the multimodal information and achieves efficient global feature aggregation.
arXiv Detail & Related papers (2024-05-10T15:27:43Z)
- Subspace Perturbation Analysis for Data-Driven Radar Target Localization [20.34399283905663]
We use subspace analysis to benchmark radar target localization accuracy across mismatched scenarios.
We generate comprehensive datasets by randomly placing targets of variable strengths in mismatched constrained areas.
We estimate target locations from these heatmap tensors using a convolutional neural network.
arXiv Detail & Related papers (2023-03-14T21:22:26Z)
- An Adaptive and Stability-Promoting Layerwise Training Approach for Sparse Deep Neural Network Architecture [0.0]
This work presents a two-stage adaptive framework for developing deep neural network (DNN) architectures that generalize well for a given training data set.
In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing parameters in the previous layers.
We introduce an epsilon-delta stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields an epsilon-delta stability-promoting algorithm (a minimal sketch of the layerwise stage follows this entry).
arXiv Detail & Related papers (2022-11-13T09:51:16Z)
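Below is a minimal sketch, not the paper's code, of the layerwise training idea summarized above: at each stage a new hidden layer and a fresh output head are added, and only they are trained while the parameters of all previously added layers are frozen. The width, number of stages, and toy regression data are illustrative assumptions.
```python
import torch
import torch.nn as nn

WIDTH = 32   # hidden width (assumed)

def train(model, x, y, steps=200, lr=1e-2):
    # Optimize only the parameters that are still trainable.
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ((model(x) - y) ** 2).mean().backward()
        opt.step()

x, y = torch.rand(128, 4), torch.rand(128, 1)   # toy regression data

blocks = []   # hidden blocks accumulated stage by stage
for stage in range(3):
    for b in blocks:                 # freeze all previously trained layers
        for p in b.parameters():
            p.requires_grad_(False)
    in_dim = 4 if not blocks else WIDTH
    blocks.append(nn.Sequential(nn.Linear(in_dim, WIDTH), nn.ReLU()))
    head = nn.Linear(WIDTH, 1)       # fresh output head for this stage
    train(nn.Sequential(*blocks, head), x, y)
```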
- Radar Image Reconstruction from Raw ADC Data using Parametric Variational Autoencoder with Domain Adaptation [0.0]
We propose a parametrically constrained variational autoencoder, capable of generating the clustered and localized target detections on the range-angle image.
To circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies.
arXiv Detail & Related papers (2022-05-30T16:17:36Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice (a toy illustration follows this entry).
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
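As a toy illustration of the setting above (not the paper's construction or proof), the snippet below trains a depth-2 network, with sigmoidal activations in both layers, by plain gradient descent to fit the indicator of a ball; the dimension, width, radius, and learning rate are assumptions.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, width, r = 2, 64, 1.0
x = 2.0 * torch.rand(2048, d) - 1.0                # inputs in [-1, 1]^d
y = (x.norm(dim=1, keepdim=True) <= r).float()     # ball indicator labels

net = nn.Sequential(
    nn.Linear(d, width), nn.Sigmoid(),   # first sigmoidal layer
    nn.Linear(width, 1), nn.Sigmoid(),   # second sigmoidal layer
)
opt = torch.optim.SGD(net.parameters(), lr=0.5)    # plain gradient descent

for step in range(2000):
    opt.zero_grad()
    ((net(x) - y) ** 2).mean().backward()          # square loss
    opt.step()

acc = ((net(x) > 0.5).float() == y).float().mean()
print(f"train accuracy: {acc:.3f}")
```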
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach for regularizing neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector termed LoRD-Net for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
arXiv Detail & Related papers (2021-02-05T04:26:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.