Dissecting U-net for Seismic Application: An In-Depth Study on Deep
Learning Multiple Removal
- URL: http://arxiv.org/abs/2206.12112v1
- Date: Fri, 24 Jun 2022 07:16:27 GMT
- Authors: Ricard Durall, Ammar Ghanim, Norman Ettrich, Janis Keuper
- Abstract summary: Seismic processing often requires suppressing multiples that appear when collecting data.
We present a deep learning-based alternative that provides competitive results while reducing the complexity of its use.
- Score: 3.058685580689605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Seismic processing often requires suppressing multiples that appear when
collecting data. To tackle these artifacts, practitioners usually rely on Radon
transform-based algorithms as post-migration gather conditioning. However, such
traditional approaches are both time-consuming and parameter-dependent, making
them fairly complex. In this work, we present a deep learning-based alternative
that provides competitive results while reducing the complexity of its use, and
hence democratizing its applicability. We observe excellent performance of our
network when inferring on complex field data, despite it being trained solely
on synthetics. Furthermore, extensive experiments show that our proposal
preserves the inherent characteristics of the data, avoiding undesired
over-smoothed results, while removing the multiples. Finally, we conduct an
in-depth analysis of the model, in which we link the effects of the main
hyperparameters to physical events. To the best of our knowledge, this study
pioneers the unboxing of neural networks for the demultiple process, helping
the user gain insight into the inner workings of the network.
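As an illustration of the kind of model the abstract describes, below is a minimal sketch of an image-to-image U-net for demultiple, assuming the network maps a migrated gather containing multiples to its multiple-free counterpart and is trained on synthetic input/target pairs. The depth, channel widths, and L1 loss are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal U-net sketch for seismic demultiple (illustrative assumptions only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the basic U-net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class DemultipleUNet(nn.Module):
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        # Encoder: downsample while increasing the channel count.
        self.enc1 = conv_block(1, channels[0])
        self.enc2 = conv_block(channels[0], channels[1])
        self.pool = nn.MaxPool2d(2)
        # Bottleneck at the coarsest resolution.
        self.bottleneck = conv_block(channels[1], channels[2])
        # Decoder: upsample and fuse encoder features via skip connections.
        self.up2 = nn.ConvTranspose2d(channels[2], channels[1], 2, stride=2)
        self.dec2 = conv_block(channels[2], channels[1])
        self.up1 = nn.ConvTranspose2d(channels[1], channels[0], 2, stride=2)
        self.dec1 = conv_block(channels[1], channels[0])
        # 1x1 convolution maps back to a single-channel (multiple-free) gather.
        self.out = nn.Conv2d(channels[0], 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # (B, 16, H, W)
        e2 = self.enc2(self.pool(e1))       # (B, 32, H/2, W/2)
        b = self.bottleneck(self.pool(e2))  # (B, 64, H/4, W/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Training uses synthetic pairs (gather with multiples -> gather without them);
# field data would only be seen at inference time, as described in the abstract.
if __name__ == "__main__":
    model = DemultipleUNet()
    x = torch.randn(2, 1, 128, 128)  # synthetic gathers (time x offset)
    y = torch.randn(2, 1, 128, 128)  # corresponding multiple-free targets
    loss = nn.functional.l1_loss(model(x), y)
    loss.backward()
    print(loss.item())
```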
Related papers
- Data-Efficient Sleep Staging with Synthetic Time Series Pretraining [1.642094639107215]
We propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging.
Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects.
arXiv Detail & Related papers (2024-03-13T14:57:10Z) - Fighting over-fitting with quantization for learning deep neural
networks on noisy labels [7.09232719022402]
We study the ability of compression methods to tackle both of these problems at once.
We hypothesize that quantization-aware training, by restricting the expressivity of neural networks, behaves as a regularization.
arXiv Detail & Related papers (2023-03-21T12:36:58Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - BatchFormer: Learning to Explore Sample Relationships for Robust
Representation Learning [93.38239238988719]
We propose to enable deep neural networks with the ability to learn the sample relationships from each mini-batch.
BatchFormer is applied into the batch dimension of each mini-batch to implicitly explore sample relationships during training.
We perform extensive experiments on over ten datasets and the proposed method achieves significant improvements on different data scarcity applications.
arXiv Detail & Related papers (2022-03-03T05:31:33Z) - An Information-Theoretic Framework for Supervised Learning [22.280001450122175]
We propose a novel information-theoretic framework with its own notions of regret and sample complexity.
We study the sample complexity of learning from data generated by deep neural networks with ReLU activation units.
We conclude by corroborating our theoretical results with experimental analysis of random single-hidden-layer neural networks.
arXiv Detail & Related papers (2022-03-01T05:58:28Z) - Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Active Importance Sampling for Variational Objectives Dominated by Rare
Events: Consequences for Optimization and Generalization [12.617078020344618]
We introduce an approach that combines rare events sampling techniques with neural network optimization to optimize objective functions dominated by rare events.
We show that importance sampling reduces the variance of the solution to a learning problem, suggesting benefits for generalization.
Our numerical experiments demonstrate that we can successfully learn even with the compounding difficulties of high-dimensional and rare data.
arXiv Detail & Related papers (2020-08-11T23:38:09Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z) - On the Robustness of Active Learning [0.7340017786387767]
Active Learning is concerned with how to identify the most useful samples for a Machine Learning algorithm to be trained with.
We find that it is often applied with not enough care and domain knowledge.
We propose the new "Sum of Squared Logits" method based on the Simpson diversity index and investigate the effect of using the confusion matrix for balancing in sample selection.
arXiv Detail & Related papers (2020-06-18T09:07:23Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural
Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural network with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.