Transfer learning for self-supervised, blind-spot seismic denoising
- URL: http://arxiv.org/abs/2209.12210v1
- Date: Sun, 25 Sep 2022 12:58:10 GMT
- Title: Transfer learning for self-supervised, blind-spot seismic denoising
- Authors: Claire Birnie and Tariq Alkhalifah
- Abstract summary: We propose an initial, supervised training of the network on a frugally-generated synthetic dataset prior to fine-tuning in a self-supervised manner on the field dataset of interest.
Considering the change in peak signal-to-noise ratio, as well as the volume of noise reduced and signal leakage observed, we illustrate the clear benefit in initialising the self-supervised network with the weights from a supervised base-training.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Noise in seismic data arises from numerous sources and is continually
evolving. The use of supervised deep learning procedures for denoising of
seismic datasets often results in poor performance: this is due to the lack of
noise-free field data to act as training targets and the large difference in
characteristics between synthetic and field datasets. Self-supervised,
blind-spot networks typically overcome these limitations by training directly on
the raw, noisy data. However, such networks often rely on a random noise
assumption, and their denoising capabilities quickly decrease in the presence
of even minimally-correlated noise. Extending from blind-spots to blind-masks
can efficiently suppress coherent noise along a specific direction, but it
cannot adapt to the ever-changing properties of noise. To preempt the network's
ability to predict the signal and reduce its opportunity to learn the noise
properties, we propose an initial, supervised training of the network on a
frugally-generated synthetic dataset prior to fine-tuning in a self-supervised
manner on the field dataset of interest. Considering the change in peak
signal-to-noise ratio, as well as the volume of noise reduced and signal
leakage observed, we illustrate the clear benefit in initialising the
self-supervised network with the weights from a supervised base-training. This
is further supported by a test on a field dataset where the fine-tuned network
strikes the best balance between signal preservation and noise reduction.
Finally, the use of the unrealistic, frugally-generated synthetic dataset for
the supervised base-training offers a number of benefits: minimal prior
geological knowledge is required, the computational cost of dataset generation
is substantially reduced, and the need to re-train the network should recording
conditions change is lessened, to name a few.
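The two-stage recipe described above (supervised base-training on cheap synthetic pairs, then self-supervised blind-spot fine-tuning on target data with no clean labels) can be sketched in miniature. Everything below is a hypothetical toy, not the paper's network: a linear single-pixel predictor stands in for the denoiser, and a constant-valued patch plus white noise stands in for the frugally-generated synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 9        # flattened 3x3 patch; index 4 is the centre pixel
CENTRE = 4

# Toy linear "denoiser": predicts the centre pixel's signal from the patch.
w = rng.normal(scale=0.1, size=P)

def sgd_step(w, x, target, lr=0.01):
    # one stochastic-gradient step on the squared error
    pred = x @ w
    return w - lr * 2.0 * (pred - target) * x

# Stage 1: supervised base-training on frugal synthetic clean/noisy pairs.
for _ in range(3000):
    s = rng.normal()                               # toy signal: constant patch
    noisy = s * np.ones(P) + 0.3 * rng.normal(size=P)
    w = sgd_step(w, noisy, s)                      # clean target is available

# Stage 2: self-supervised blind-spot fine-tuning on "field" data,
# where no clean target exists: hide the centre pixel and predict it.
neighbours = [i for i in range(P) if i != CENTRE]
for _ in range(3000):
    s = rng.normal()
    field = s * np.ones(P) + 0.3 * rng.normal(size=P)
    masked = field.copy()
    masked[CENTRE] = field[rng.choice(neighbours)]  # blind the centre
    w = sgd_step(w, masked, field[CENTRE])          # predict the held-out value

# Evaluate: the fine-tuned predictor should beat the raw noisy centre pixel,
# whose error equals the noise variance (0.3**2 = 0.09).
errs = []
for _ in range(500):
    s = rng.normal()
    noisy = s * np.ones(P) + 0.3 * rng.normal(size=P)
    errs.append((noisy @ w - s) ** 2)
test_mse = float(np.mean(errs))
print(f"test MSE: {test_mse:.4f}")
```

In this toy, the supervised stage initialises weights that already average over the patch, so the self-supervised stage starts from a sensible solution rather than from scratch, mirroring the benefit the abstract reports for warm-starting the blind-spot network.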
Related papers
- Effective Noise-aware Data Simulation for Domain-adaptive Speech Enhancement Leveraging Dynamic Stochastic Perturbation [25.410770364140856]
Cross-domain speech enhancement (SE) is often faced with severe challenges due to the scarcity of noise and background information in an unseen target domain.
This study puts forward a novel data simulation method to address this issue, leveraging noise-extractive techniques and generative adversarial networks (GANs).
We introduce the notion of dynamic perturbation, which can inject controlled perturbations into the noise embeddings during inference.
arXiv Detail & Related papers (2024-09-03T02:29:01Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Explainable Artificial Intelligence driven mask design for self-supervised seismic denoising [0.0]
Self-supervised coherent noise suppression methods require extensive knowledge of the noise statistics.
We propose the use of explainable artificial intelligence approaches to see inside the black box that is the denoising network.
We show that a simple averaging of the Jacobian contributions over a number of randomly selected input pixels provides an indication of the most effective mask.
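A toy version of this Jacobian-averaging idea, assuming a hand-made blind-spot filter in place of a trained network (the kernel, image size, and pixel count below are all illustrative): averaging the absolute Jacobian of randomly chosen output pixels with respect to their input patches recovers the filter's footprint, with the blind spot showing up as a near-zero centre entry.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "denoiser": 3x3 convolution whose kernel averages the 8 neighbours
# and ignores the centre pixel (a hand-made blind-spot filter).
kernel = np.full((3, 3), 1 / 8.0)
kernel[1, 1] = 0.0

def denoise(img):
    # 'valid' 2-D correlation via explicit loops (small images only)
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def jacobian_patch(img, i, j, eps=1e-6):
    """Finite-difference Jacobian of output pixel (i, j) w.r.t. its 3x3 input patch."""
    base = denoise(img)[i, j]
    J = np.zeros((3, 3))
    for di in range(3):
        for dj in range(3):
            bumped = img.copy()
            bumped[i + di, j + dj] += eps
            J[di, dj] = (denoise(bumped)[i, j] - base) / eps
    return J

img = rng.normal(size=(12, 12))

# Average |Jacobian| over a handful of randomly selected output pixels.
n = 20
mask = np.zeros((3, 3))
for _ in range(n):
    i = int(rng.integers(0, 10))
    j = int(rng.integers(0, 10))
    mask += np.abs(jacobian_patch(img, i, j))
mask /= n
print(mask.round(3))  # centre entry ~0: the recovered blind-spot mask
```

Because the toy filter is linear, the averaged Jacobian reproduces the kernel's absolute values exactly; for a real trained network the same averaging gives an empirical picture of which neighbours the network actually uses, which is the mask-design signal the paper exploits.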
arXiv Detail & Related papers (2023-07-13T11:02:55Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, which avoids previous arbitrarily tuning from a mini-batch of samples.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- On-the-fly Denoising for Data Augmentation in Natural Language Understanding [101.46848743193358]
We propose an on-the-fly denoising technique for data augmentation that learns from soft augmented labels provided by an organic teacher model trained on the cleaner original data.
Our method can be applied to general augmentation techniques and consistently improve the performance on both text classification and question-answering tasks.
arXiv Detail & Related papers (2022-12-20T18:58:33Z)
- Weak-signal extraction enabled by deep-neural-network denoising of diffraction data [26.36525764239897]
We show how data can be denoised via a deep convolutional neural network.
We demonstrate that weak signals stemming from charge ordering, insignificant in the noisy data, become visible and accurate in the denoised data.
arXiv Detail & Related papers (2022-09-19T14:43:01Z)
- Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders [62.997667081978825]
We propose a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input.
The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals.
arXiv Detail & Related papers (2021-09-18T14:51:24Z)
- The potential of self-supervised networks for random noise suppression in seismic data [0.0]
Blind-spot networks are shown to be an efficient suppressor of random noise in seismic data.
Results are compared with two commonly used random denoising techniques: FX-deconvolution and Curvelet transform.
We believe this is just the beginning of utilising self-supervised learning in seismic applications.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Joint self-supervised blind denoising and noise estimation [0.0]
Two neural networks jointly predict the clean signal and infer the noise distribution.
We show empirically with synthetic noisy data that our model captures the noise distribution efficiently.
arXiv Detail & Related papers (2021-02-16T08:37:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.