StorSeismic: A new paradigm in deep learning for seismic processing
- URL: http://arxiv.org/abs/2205.00222v1
- Date: Sat, 30 Apr 2022 09:55:00 GMT
- Title: StorSeismic: A new paradigm in deep learning for seismic processing
- Authors: Randy Harsuko and Tariq Alkhalifah
- Abstract summary: StorSeismic is a framework for seismic data processing.
We pre-train the network on field data, along with synthetically generated data, in the self-supervised step.
Then, we use the labeled synthetic data to fine-tune the pre-trained network in a supervised fashion to perform various seismic processing tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learned tasks on seismic data are often trained sequentially and
separately, even though they utilize the same features (i.e. geometrical) of
the data. We present StorSeismic, a framework for seismic data processing,
which consists of neural network pre-training and fine-tuning procedures. We,
specifically, utilize a neural network as a preprocessing model to store
seismic data features of a particular dataset for any downstream tasks. After
pre-training, the resulting model can be utilized later, through a fine-tuning
procedure, to perform tasks using limited additional training. Used often in
Natural Language Processing (NLP) and lately in vision tasks, BERT
(Bidirectional Encoder Representations from Transformers), a Transformer-based
model, provides an optimal platform for this framework. The
attention mechanism of BERT, applied here on a sequence of traces within the
shot gather, is able to capture and store key geometrical features of the
seismic data. We pre-train StorSeismic on field data, along with synthetically
generated ones, in the self-supervised step. Then, we use the labeled synthetic
data to fine-tune the pre-trained network in a supervised fashion to perform
various seismic processing tasks, like denoising, velocity estimation, first
arrival picking, and NMO correction. Finally, the fine-tuned model is used to obtain
satisfactory inference results on the field data.
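The self-supervised pre-training step described above can be sketched as follows. This is a minimal, hypothetical illustration of a masked-trace reconstruction objective on a shot gather, using NumPy; the trivial neighbor-averaging "model" is a placeholder for the paper's actual BERT encoder, and all function names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_traces(gather, mask_ratio=0.15, rng=rng):
    """Zero out a random subset of traces in a (num_traces, num_samples)
    shot gather; return the masked gather and the masked indices."""
    n_traces = gather.shape[0]
    n_mask = max(1, int(mask_ratio * n_traces))
    idx = rng.choice(n_traces, size=n_mask, replace=False)
    masked = gather.copy()
    masked[idx] = 0.0
    return masked, idx

def reconstruct(masked, idx):
    """Placeholder 'model': fill each masked trace with the average of its
    immediate neighbors (the real framework uses a Transformer encoder)."""
    out = masked.copy()
    n = masked.shape[0]
    for i in idx:
        lo, hi = max(0, i - 1), min(n - 1, i + 1)
        out[i] = 0.5 * (masked[lo] + masked[hi])
    return out

# Self-supervised objective: reconstruction error on the masked traces only.
gather = rng.standard_normal((32, 256))
masked, idx = mask_traces(gather)
recon = reconstruct(masked, idx)
loss = np.mean((recon[idx] - gather[idx]) ** 2)
```

After this pre-training stage, the stored weights would be fine-tuned on labeled synthetic data for each downstream task, as the abstract describes.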
Related papers
- A convolutional neural network approach to deblending seismic data [1.5488464287814563]
We present a data-driven deep learning-based method for fast and efficient seismic deblending.
A convolutional neural network (CNN) is designed according to the special character of seismic data.
After training and validation of the network, seismic deblending can be performed in near real time.
arXiv Detail & Related papers (2024-09-12T10:54:35Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Neural Koopman prior for data assimilation [7.875955593012905]
We use a neural network architecture to embed dynamical systems in latent spaces.
We introduce methods that enable training such a model for long-term continuous reconstruction.
The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques.
arXiv Detail & Related papers (2023-09-11T09:04:36Z) - Optimizing a Transformer-based network for a deep learning seismic
processing workflow [0.0]
StorSeismic is a recently introduced model based on the Transformer to adapt to various seismic processing tasks.
We observe faster pretraining and competitive results on the fine-tuning tasks and, additionally, fewer parameters to train compared to the vanilla model.
arXiv Detail & Related papers (2023-08-09T07:11:42Z) - SynBench: Task-Agnostic Benchmarking of Pretrained Representations using
Synthetic Data [78.21197488065177]
Recent success in fine-tuning large models, that are pretrained on broad data at scale, on downstream tasks has led to a significant paradigm shift in deep learning.
This paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data.
arXiv Detail & Related papers (2022-10-06T15:25:00Z) - Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z) - MLReal: Bridging the gap between training on synthetic data and real
data applications in machine learning [1.9852463786440129]
We describe a novel approach to enhance supervised training on synthetic data with real data features.
In the training stage, the input data are from the synthetic domain and the auto-correlated data are from the real domain.
In the inference/application stage, the input data are from the real subset domain and the mean of the autocorrelated sections are from the synthetic data subset domain.
arXiv Detail & Related papers (2021-09-11T14:43:34Z) - Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Transfer Learning with Convolutional Networks for Atmospheric Parameter
Retrieval [14.131127382785973]
The Infrared Atmospheric Sounding Interferometer (IASI) on board the MetOp satellite series provides important measurements for Numerical Weather Prediction (NWP)
Retrieving accurate atmospheric parameters from the raw data provided by IASI is a large challenge, but necessary in order to use the data in NWP models.
We show how features extracted from the IASI data by a CNN trained to predict a physical variable can be used as inputs to another statistical method designed to predict a different physical variable at low altitude.
arXiv Detail & Related papers (2020-12-09T09:28:42Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.