Seismic Fault Segmentation via 3D-CNN Training by a Few 2D Slices Labels
- URL: http://arxiv.org/abs/2105.03857v1
- Date: Sun, 9 May 2021 07:13:40 GMT
- Title: Seismic Fault Segmentation via 3D-CNN Training by a Few 2D Slices Labels
- Authors: YiMin Dou, Kewen Li, Jianbing Zhu, Xiao Li, Yingjie Xi
- Abstract summary: We present a new binary cross-entropy and smooth L1 loss to train 3D-CNN by sampling some 2D slices from 3D seismic data.
Experiments show that our method can extract 3D seismic features from a few 2D slice labels on real data to segment a complete fault volume.
- Score: 6.963867115353744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting faults in seismic data is a crucial step for seismic structural
interpretation, reservoir characterization and well placement, and it is full
of challenges. Some recent works regard fault detection as an image
segmentation task. Image segmentation requires a large amount of labeled
data, and 3D seismic data in particular has a complex structure and
considerable noise. Its annotation therefore demands expert experience and a
huge workload, and wrong or missing labels degrade the segmentation
performance of the model. In this study, we present a new binary cross-entropy
and smooth L1 loss (λ-BCE and λ-smooth L1) to effectively train
3D-CNN by sampling some 2D slices from 3D seismic data, so that the model can
learn the segmentation of 3D seismic data from a few 2D slices. In order to
fully extract information from limited and low-dimensional data and suppress
seismic noise, we propose an attention module that can be used for active
supervision training (Active Attention Module, AAM) and embedded in the network
to participate in the differentiation and optimization of the model. During
training, the attention heatmap target is generated from the original binary
label and supervises the attention module via the λ-smooth L1
loss. Qualitative experiments show that our method can extract 3D seismic
features from a few 2D slice labels on real data and segment a complete fault
volume; through visualization, the segmentation quality reaches the state of
the art. Quantitative experiments on synthetic data prove the
effectiveness of our training method and attention module. Experiments show
that with our method, labeling as few as one 2D slice every 30 frames (3.3% of
the original labels), the model can achieve segmentation performance similar
to that of full 3D labels.
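The core idea of training on a few 2D slices can be sketched as a masked loss: per-voxel binary cross-entropy is computed over the whole predicted volume but averaged only over voxels lying on the annotated slices. Below is a minimal NumPy sketch of that idea; the function names and the every-k slice-sampling scheme are illustrative assumptions, not the authors' exact λ-BCE formulation.

```python
import numpy as np

def slice_mask_every_k(shape, k=30, axis=0):
    """Binary mask marking every k-th 2D slice of a 3D volume as labeled."""
    mask = np.zeros(shape, dtype=np.float32)
    index = [slice(None)] * len(shape)
    index[axis] = slice(None, None, k)   # selects slices 0, k, 2k, ...
    mask[tuple(index)] = 1.0
    return mask

def sparse_slice_bce(pred, label, mask, eps=1e-7):
    """Binary cross-entropy averaged only over voxels on annotated slices."""
    pred = np.clip(pred, eps, 1.0 - eps)
    per_voxel = -(label * np.log(pred) + (1.0 - label) * np.log(1.0 - pred))
    return float((per_voxel * mask).sum() / max(mask.sum(), 1.0))

# Example: a 60x32x32 volume with every 30th inline slice labeled,
# i.e. 2 of 60 slices (~3.3%), matching the sparsity quoted above.
vol_shape = (60, 32, 32)
mask = slice_mask_every_k(vol_shape, k=30)
pred = np.full(vol_shape, 0.5)       # uninformative prediction
label = np.zeros(vol_shape)          # no-fault ground truth
loss = sparse_slice_bce(pred, label, mask)   # = -log(0.5) ≈ 0.693
```

Because unlabeled voxels are masked out of the average, the gradient signal comes only from the annotated slices while the 3D-CNN still sees the full volume as input.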
Related papers
- Bayesian Self-Training for Semi-Supervised 3D Segmentation [59.544558398992386]
3D segmentation is a core problem in computer vision.
Densely labeling 3D point clouds to employ fully-supervised training remains too labor intensive and expensive.
Semi-supervised training provides a more practical alternative, where only a small set of labeled data is given, accompanied by a larger unlabeled set.
arXiv Detail & Related papers (2024-09-12T14:54:31Z) - Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from the noise and variation in unlabeled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z) - Label-Efficient 3D Brain Segmentation via Complementary 2D Diffusion Models with Orthogonal Views [10.944692719150071]
We propose a novel 3D brain segmentation approach using complementary 2D diffusion models.
Our goal is to achieve reliable segmentation quality without requiring complete labels for each individual subject.
arXiv Detail & Related papers (2024-07-17T06:14:53Z) - 3D Open-Vocabulary Panoptic Segmentation with 2D-3D Vision-Language Distillation [40.49322398635262]
We propose the first method to tackle 3D open-vocabulary panoptic segmentation.
Our model takes advantage of the fusion between learnable LiDAR features and dense frozen vision CLIP features.
We propose two novel loss functions: object-level distillation loss and voxel-level distillation loss.
arXiv Detail & Related papers (2024-01-04T18:39:32Z) - 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z) - Augment and Criticize: Exploring Informative Samples for Semi-Supervised
Monocular 3D Object Detection [64.65563422852568]
We improve the challenging monocular 3D object detection problem with a general semi-supervised framework.
We introduce a novel, simple, yet effective 'Augment and Criticize' framework that explores abundant informative samples from unlabeled data.
The two new detectors, dubbed 3DSeMo_DLE and 3DSeMo_FLEX, achieve state-of-the-art results with remarkable improvements for over 3.5% AP_3D/BEV (Easy) on KITTI.
arXiv Detail & Related papers (2023-03-20T16:28:15Z) - LWSIS: LiDAR-guided Weakly Supervised Instance Segmentation for
Autonomous Driving [34.119642131912485]
We present a more artful framework, LiDAR-guided Weakly Supervised Instance Segmentation (LWSIS).
LWSIS uses the off-the-shelf 3D data, i.e., Point Cloud, together with the 3D boxes, as natural weak supervisions for training the 2D image instance segmentation models.
Our LWSIS not only exploits the complementary information in multimodal data during training, but also significantly reduces the cost of the dense 2D masks.
arXiv Detail & Related papers (2022-12-07T08:08:01Z) - Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based
Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z) - ST3D: Self-training for Unsupervised Domain Adaptation on 3D
ObjectDetection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z) - 3D Guided Weakly Supervised Semantic Segmentation [27.269847900950943]
We propose a weakly supervised 2D semantic segmentation model by incorporating sparse bounding box labels with available 3D information.
We manually labeled a subset of the 2D-3D Semantics(2D-3D-S) dataset with bounding boxes, and introduce our 2D-3D inference module to generate accurate pixel-wise segment proposal masks.
arXiv Detail & Related papers (2020-12-01T03:34:15Z)