ContrasInver: Ultra-Sparse Label Semi-supervised Regression for
Multi-dimensional Seismic Inversion
- URL: http://arxiv.org/abs/2302.06441v3
- Date: Mon, 17 Jul 2023 14:13:25 GMT
- Title: ContrasInver: Ultra-Sparse Label Semi-supervised Regression for
Multi-dimensional Seismic Inversion
- Authors: Yimin Dou, Kewen Li, Wenjun Lv, Timing Li, Hongjie Duan, Zhifeng Xu
- Abstract summary: ContrasInver is a method that achieves seismic inversion using as few as two or three well logs.
In experiments, ContrasInver achieved state-of-the-art performance on the synthetic SEAM I dataset.
It is the first data-driven approach to yield reliable results on the Netherlands F3 and Delft surveys, using only three and two well logs, respectively.
- Score: 7.356328937024184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automated interpretation and inversion of seismic data have advanced
significantly with the development of Deep Learning (DL) methods. However,
these methods often require numerous costly well logs, limiting their
application to mature or synthetic data. This paper presents ContrasInver,
a method that achieves seismic inversion using as few as two or three well
logs, significantly reducing current requirements. In ContrasInver, we propose
three key innovations to address the challenges of applying semi-supervised
learning to regression tasks with ultra-sparse labels. The Multi-dimensional
Sample Generation (MSG) technique pioneers a paradigm for sample generation in
multi-dimensional inversion. It produces a large number of diverse samples from
a single well, while establishing lateral continuity in seismic data. MSG
yields substantial improvements over current techniques, even without the use
of semi-supervised learning. The Region-Growing Training (RGT) strategy
leverages the inherent continuity of seismic data, propagating accuracy
outward from regions near the well logs to more distant ones. The Impedance
Vectorization Projection (IVP) vectorizes impedance values
and performs semi-supervised learning in a compressed space. We demonstrated
that the Jacobian matrix derived from this space can filter out some outlier
components in pseudo-label vectors, thereby solving the value confusion issue
in semi-supervised regression learning. In the experiments, ContrasInver
achieved state-of-the-art performance on the synthetic SEAM I dataset. On
field data with only two or three well logs, methods based on the components
proposed in this paper were the only ones to achieve reasonable results. It is
the first data-driven approach to yield reliable results on the Netherlands F3
and Delft surveys, using only three and two well logs, respectively.
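Of the three components, IVP lends itself most readily to a code illustration. Below is a minimal sketch of the idea that a Jacobian computed in the compressed impedance space can flag outlier components of a pseudo-label vector; the `decoder`, the sensitivity statistic, and the threshold `tau` are all illustrative assumptions, not the authors' implementation. MSG and RGT are data-side strategies (sample generation around wells and proximity-ordered training) and are not sketched here.

```python
# Minimal sketch of IVP-style pseudo-label filtering (illustrative only;
# the decoder, statistic, and threshold are assumptions, not the authors' code).
import torch

def filter_pseudo_label(decoder, z_teacher, tau=1.0):
    """Zero out components of a compressed pseudo-label vector whose
    decoder sensitivity (Jacobian column norm) is anomalously large."""
    # Jacobian of the decoded impedance trace w.r.t. the compressed vector:
    # shape (output_dim, latent_dim).
    J = torch.autograd.functional.jacobian(decoder, z_teacher.detach())
    sensitivity = J.norm(dim=0)            # influence of each component
    median = sensitivity.median()
    # Treat components far above the median sensitivity as the "outlier
    # components" the abstract refers to, and mask them out.
    keep = sensitivity <= (1.0 + tau) * median
    return z_teacher.detach() * keep, keep

# Hypothetical use in a semi-supervised step: a teacher proposes a compressed
# pseudo-label for an unlabeled trace, outliers are masked, and the student is
# supervised only on the surviving components:
#   z_hat, keep = filter_pseudo_label(decoder, teacher_encoder(x_u))
#   loss_unsup = ((student_encoder(x_u) - z_hat)[keep] ** 2).mean()
```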
Related papers
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
Label-efficient segmentation is widely studied for 3D point clouds because annotating point clouds densely is difficult.
Recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from the noise and variation in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gap between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z)
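The entropy-regularization idea in the entry above is close in spirit to ContrasInver's pseudo-label handling, so a generic sketch may help; the loss below combines distribution alignment with an entropy penalty and is an assumption about the general technique, not the ERDA objective itself.

```python
# Generic entropy-regularized pseudo-label loss (an assumption about the
# general technique, not the paper's exact ERDA objective).
import torch.nn.functional as F

def entropy_regularized_pl_loss(student_logits, pseudo_logits, lam=1.0):
    p = F.softmax(pseudo_logits, dim=-1)           # soft pseudo-labels
    log_q = F.log_softmax(student_logits, dim=-1)  # model predictions
    # Align the prediction distribution with the pseudo-label distribution.
    align = F.kl_div(log_q, p, reduction="batchmean")
    # Penalize high-entropy (uncertain) pseudo-labels.
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()
    return align + lam * entropy
```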
- MDM: Advancing Multi-Domain Distribution Matching for Automatic Modulation Recognition Dataset Synthesis [35.07663680944459]
Deep learning technology has been successfully introduced into Automatic Modulation Recognition (AMR) tasks.
This success is largely attributable to training on large-scale datasets.
To reduce the amount of data required, some researchers have proposed dataset distillation.
arXiv Detail & Related papers (2024-08-05T14:16:54Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
We present the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- Are Negative Samples Necessary in Entity Alignment? An Approach with High Performance, Scalability and Robustness [26.04006507181558]
We propose a novel EA method with three new components to enable high performance, scalability, and robustness.
We conduct detailed experiments on several public datasets to examine the effectiveness and efficiency of our proposed method.
arXiv Detail & Related papers (2021-08-11T15:20:41Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We establish new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
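The per-sample weighting described in the ABSGD entry above is simple enough to sketch; the softmax temperature and the update below are assumptions for illustration, not the released ABSGD code.

```python
# Sketch of per-sample importance weighting in momentum SGD, in the spirit
# of the ABSGD entry above (temperature and details are assumptions, not
# the released code).
import torch

def weighted_sgd_step(per_sample_losses, optimizer, tau=1.0):
    # Self-normalized weights over the mini-batch: tau > 0 upweights
    # hard samples (imbalance), tau < 0 downweights them (label noise).
    with torch.no_grad():
        w = torch.softmax(per_sample_losses.detach() / tau, dim=0)
    loss = (w * per_sample_losses).sum()  # weighted batch loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# weighted_sgd_step(F.cross_entropy(model(x), y, reduction="none"), optimizer)
```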
- Unsupervised Learning of slow features for Data Efficient Regression [15.73372211126635]
We propose the slow variational autoencoder (S-VAE), an extension to the $\beta$-VAE which applies a temporal similarity constraint to the latent representations.
We evaluate the three methods on their data efficiency on downstream tasks using a synthetic 2D ball tracking dataset, a dataset from a reinforcement learning environment, and a dataset generated using the DeepMind Lab environment.
arXiv Detail & Related papers (2020-12-11T12:19:45Z)
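A temporal similarity constraint of the kind the S-VAE entry describes can be sketched as a slowness penalty on consecutive latents; the quadratic form and weighting below are assumptions, not the paper's exact formulation.

```python
# Slowness penalty on consecutive latent codes, in the spirit of the S-VAE
# entry above (the quadratic form is an assumption, not the paper's prior).
import torch

def slowness_penalty(z_seq: torch.Tensor) -> torch.Tensor:
    """z_seq: (batch, time, latent_dim) latents of consecutive frames."""
    return (z_seq[:, 1:] - z_seq[:, :-1]).pow(2).mean()

# total_loss = recon + beta * kl + gamma * slowness_penalty(z_seq)
```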
- Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)