Zero-Shot Self-Consistency Learning for Seismic Irregular Spatial Sampling Reconstruction
- URL: http://arxiv.org/abs/2411.00911v1
- Date: Fri, 01 Nov 2024 11:59:28 GMT
- Title: Zero-Shot Self-Consistency Learning for Seismic Irregular Spatial Sampling Reconstruction
- Authors: Junheng Peng, Yingtian Liu, Mingwei Wang, Yong Li, Huating Li,
- Abstract summary: We propose a zero-shot self-consistency learning strategy and employ an extremely lightweight network for seismic data reconstruction.
Our method does not require additional datasets and utilizes the correlations among different parts of the data to design a self-consistency learning loss function.
- Score: 6.313946204460284
- License:
- Abstract: Seismic exploration is currently the most important method for understanding subsurface structures. However, due to surface conditions, seismic receivers may not be uniformly distributed along the measurement line, making the entire exploration work difficult to carry out. Previous deep learning methods for reconstructing seismic data often relied on additional datasets for training. While some existing methods do not require extra data, they lack constraints on the reconstruction data, leading to unstable reconstruction performance. In this paper, we propose a zero-shot self-consistency learning strategy and employ an extremely lightweight network for seismic data reconstruction. Our method does not require additional datasets and utilizes the correlations among different parts of the data to design a self-consistency learning loss function, driving a network with only 90,609 learnable parameters. We applied this method to experiments on the USGS National Petroleum Reserve-Alaska public dataset, and the results indicate that our proposed approach achieved good reconstruction results. Additionally, our method also demonstrates a certain degree of noise suppression, which is highly beneficial for large and complex seismic exploration tasks.
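The self-consistency idea can be illustrated with a minimal sketch. This is a hypothetical reconstruction of the concept, not the paper's actual loss or network: two different masked views of the same seismic gather are reconstructed, and the loss combines fidelity on the observed traces with agreement between the two reconstructions on the traces both views retain.

```python
import numpy as np

def self_consistency_loss(reconstruct, data, mask_a, mask_b):
    """Sketch of a self-consistency loss for seismic gather reconstruction.

    reconstruct -- callable mapping a masked gather to a full gather
                   (stands in for the lightweight network; hypothetical)
    data        -- fully sampled gather, shape (traces, samples)
    mask_a/b    -- binary masks simulating two irregular samplings
    """
    rec_a = reconstruct(data * mask_a)   # reconstruction from view A
    rec_b = reconstruct(data * mask_b)   # reconstruction from view B
    overlap = mask_a * mask_b            # traces visible to both views

    # Fidelity: each reconstruction must match the data it observed.
    fidelity = np.mean(((rec_a - data) * mask_a) ** 2) \
             + np.mean(((rec_b - data) * mask_b) ** 2)
    # Consistency: the two reconstructions must agree where they overlap.
    consistency = np.mean(((rec_a - rec_b) * overlap) ** 2)
    return fidelity + consistency
```

An oracle reconstructor that returns the full gather drives the loss to zero, while any reconstructor that disagrees with the observed traces, or with itself across views, is penalized; no external training dataset is involved.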
Related papers
- RECOVAR: Representation Covariances on Deep Latent Spaces for Seismic Event Detection [0.0]
We develop an unsupervised method for earthquake detection that learns to detect earthquakes from raw waveforms.
The performance is comparable to, and in some cases better than, some state-of-the-art supervised methods.
The approach has the potential to be useful for time series datasets from other domains.
arXiv Detail & Related papers (2024-07-25T21:33:54Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, this is often infeasible in practice due to memory constraints or data privacy concerns.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - FaultSeg Swin-UNETR: Transformer-Based Self-Supervised Pretraining Model
for Fault Recognition [13.339333273943842]
This paper introduces an approach to enhance seismic fault recognition through self-supervised pretraining.
We have employed the Swin Transformer model as the core network and employed the SimMIM pretraining task to capture unique features related to discontinuities in seismic data.
Experimental results demonstrate that our proposed method attains state-of-the-art performance on the Thebe dataset, as measured by the OIS and ODS metrics.
arXiv Detail & Related papers (2023-10-27T08:38:59Z) - Robust Geometry-Preserving Depth Estimation Using Differentiable
Rendering [93.94371335579321]
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
arXiv Detail & Related papers (2023-09-18T12:36:39Z) - Understanding Reconstruction Attacks with the Neural Tangent Kernel and
Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation, that is, we can retrain on reconstructed images and obtain high predictive accuracy.
arXiv Detail & Related papers (2023-02-02T21:41:59Z) - Minimizing the Accumulated Trajectory Error to Improve Dataset
Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against accumulated-error perturbations, thanks to the regularization toward the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z) - MDA GAN: Adversarial-Learning-based 3-D Seismic Data Interpolation and
Reconstruction for Complex Missing [6.345037597566314]
Multi-Dimensional Adversarial GAN (MDA GAN) is a novel 3-D GAN framework.
MDA GAN employs three discriminators to ensure the consistency of the reconstructed data with the original data distribution in each dimension.
The method achieves reasonable reconstructions for up to 95% of random discrete missing, 100 traces of continuous missing and more complex hybrid missing.
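The per-dimension discriminator idea can be sketched as follows. This is a hypothetical illustration, not MDA GAN's actual architecture: each of three discriminators scores 2-D slices taken along one axis of the 3-D volume, and their losses are summed so the reconstruction must look realistic in every dimension.

```python
import numpy as np

def multi_dim_adv_loss(fake_volume, real_volume, discriminators):
    """Sketch of a dimension-wise adversarial loss for a 3-D volume.

    discriminators -- one callable per axis, each mapping a stack of
                      2-D slices to per-slice realness scores in (0, 1)
                      (hypothetical stand-ins for learned networks)
    """
    total = 0.0
    for axis, disc in enumerate(discriminators):
        # Re-orient the volume so slices along `axis` are stacked first.
        fake_slices = np.moveaxis(fake_volume, axis, 0)
        real_slices = np.moveaxis(real_volume, axis, 0)
        # Standard GAN discriminator objective, averaged over slices.
        total += np.mean(np.log(disc(real_slices) + 1e-8)) \
               + np.mean(np.log(1.0 - disc(fake_slices) + 1e-8))
    return total
```

Summing one such term per axis means no single viewing direction of the reconstructed volume can drift away from the original data distribution unnoticed.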
arXiv Detail & Related papers (2022-04-07T04:01:53Z) - Self-Supervised Learning for MRI Reconstruction with a Parallel Network
Training Framework [24.46388892324129]
The proposed method is flexible and can be employed in any existing deep learning-based method.
The effectiveness of the method is evaluated on an open brain MRI dataset.
arXiv Detail & Related papers (2021-09-26T06:09:56Z) - Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z) - Spatiotemporal Modeling of Seismic Images for Acoustic Impedance
Estimation [12.653673008542155]
Machine learning-based inversion usually works in a trace-by-trace fashion on seismic data.
We propose a deep learning-based seismic inversion workflow that models each seismic trace not only temporally but also spatially.
arXiv Detail & Related papers (2020-06-28T00:19:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.