Deep learning based sferics recognition for AMT data processing in the
dead band
- URL: http://arxiv.org/abs/2209.13647v1
- Date: Thu, 22 Sep 2022 02:31:28 GMT
- Title: Deep learning based sferics recognition for AMT data processing in the
dead band
- Authors: Enhua Jiang, Rujun Chen, Xinming Wu, Jianxin Liu, Debin Zhu and
Weiqiang Liu
- Abstract summary: In the audio magnetotellurics (AMT) sounding data processing, the absence of sferic signals in some time ranges typically results in a lack of energy in the AMT dead band.
We propose a deep convolutional neural network (CNN) to automatically recognize sferic signals from redundantly recorded data in a long time range.
Our method can significantly improve the S/N and effectively solve the problem of the lack of energy in the dead band.
- Score: 5.683853455697258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the audio magnetotellurics (AMT) sounding data processing, the absence of
sferic signals in some time ranges typically results in a lack of energy in the
AMT dead band, which may cause unreliable resistivity estimates. We propose a
deep convolutional neural network (CNN) to automatically recognize sferic
signals from redundantly recorded data in a long time range and use them to
compensate for the resistivity estimation. We train the CNN by using field time
series data with different signal-to-noise ratios that were acquired from
different regions in mainland China. To solve the potential overfitting problem
due to the limited number of sferic labels, we propose a training strategy that
randomly generates training samples (with random data augmentations) while
optimizing the CNN model parameters. We stop the training process and data
generation once the training loss converges. In addition, we use a weighted
binary cross-entropy loss function to solve the sample imbalance problem to
better optimize the network, use multiple reasonable metrics to evaluate
network performance, and carry out ablation experiments to optimally choose the
model hyperparameters. Extensive field data applications show that our trained
CNN can robustly recognize sferic signals from noisy time series for subsequent
impedance estimation. The subsequent processing results show that our method
can significantly improve the S/N and effectively solve the problem of the lack
of energy in the dead band. Compared to the traditional processing method
without sferic compensation, our method generates smoother and more reasonable
apparent resistivity-phase curves and depolarized phase tensors, corrects the
erroneous sudden drop of the high-frequency apparent resistivity and the
abnormal phase-reversal behavior, and ultimately better restores the true
shallow subsurface resistivity structure.
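As a concrete illustration of the training strategy and loss described above, here is a minimal PyTorch-style sketch of on-the-fly sample generation with random augmentations and a weighted binary cross-entropy loss; the augmentation choices, the pos_weight value, and all function names are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def random_augment(window):
        """Illustrative augmentations for a 1-D time-series window:
        amplitude scaling, polarity flip, and a circular time shift."""
        if torch.rand(1) < 0.5:
            window = window * (0.5 + torch.rand(1))       # random amplitude scaling
        if torch.rand(1) < 0.5:
            window = -window                              # polarity flip
        shift = int(torch.randint(0, window.shape[-1], (1,)))
        return torch.roll(window, shifts=shift, dims=-1)  # circular time shift

    def weighted_bce(logits, labels, pos_weight=10.0):
        """Weighted binary cross-entropy: up-weights the rare sferic (positive)
        class so the class imbalance does not dominate the gradient."""
        return F.binary_cross_entropy_with_logits(
            logits, labels, pos_weight=torch.tensor(pos_weight))

    def train(model, windows, labels, optimizer, n_steps=10000, batch_size=32):
        """Randomly generates augmented training samples while the CNN
        parameters are optimized; the paper stops generation once the training
        loss converges, here a fixed step budget stands in for that check."""
        for step in range(n_steps):
            idx = torch.randint(0, len(windows), (batch_size,)).tolist()
            batch = torch.stack([random_augment(windows[i]) for i in idx])
            target = torch.stack([labels[i] for i in idx]).float()
            loss = weighted_bce(model(batch), target)     # model: one logit per window
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()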
Related papers
- Efficient NeRF Optimization -- Not All Samples Remain Equally Hard [9.404889815088161]
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF).
NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources.
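The entry above refers to online hard sample mining; the following is a generic, hedged sketch of that idea, not the paper's NeRF-specific implementation (model, rays, targets, and keep_frac are placeholder names).

    import torch

    def hard_sample_step(model, rays, targets, optimizer, keep_frac=0.25):
        """Generic online hard sample mining: score every candidate sample by
        its loss, then back-propagate only through the hardest fraction."""
        per_sample_loss = ((model(rays) - targets) ** 2).mean(dim=-1)   # per-ray MSE
        k = max(1, int(keep_frac * per_sample_loss.numel()))
        hard_loss = per_sample_loss.topk(k).values.mean()               # hardest k samples
        optimizer.zero_grad()
        hard_loss.backward()
        optimizer.step()
        return float(hard_loss)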
arXiv Detail & Related papers (2024-08-06T13:49:01Z)
- Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies [35.29595714883275]
We develop an efficient sketch-based approximation to the Nadaraya-Watson estimator.
Our sampling algorithm outperforms the baseline in terms of wall-clock time and accuracy on four datasets.
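For context, the Nadaraya-Watson estimator mentioned above is the kernel-weighted average f(x) = sum_i K(x, x_i) y_i / sum_i K(x, x_i). The NumPy sketch below shows only this exact estimator with an assumed Gaussian kernel, not the paper's sketch-based approximation.

    import numpy as np

    def nadaraya_watson(x_query, x_train, y_train, bandwidth=1.0):
        """Exact Nadaraya-Watson kernel regression: a kernel-weighted average
        of the training targets. The paper replaces this O(n) computation with
        a sketch-based approximation; only the exact estimator is shown here."""
        sq_dist = np.sum((x_train - x_query) ** 2, axis=1)      # squared distances
        w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))           # Gaussian kernel weights
        return np.sum(w * y_train) / (np.sum(w) + 1e-12)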
arXiv Detail & Related papers (2023-11-22T18:40:18Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs to improve the stability of the training process.
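Implicit SGD evaluates the gradient at the next iterate, theta_{k+1} = theta_k - lr * grad L(theta_{k+1}), rather than at the current one, which is the source of its improved stability. Below is a generic sketch that approximates one implicit step with a few fixed-point iterations; it is not the paper's PINN-specific algorithm, and lr and inner_iters are illustrative values.

    import torch

    def implicit_sgd_step(params, loss_fn, lr=1e-2, inner_iters=5):
        """One implicit SGD step: theta_new = theta_old - lr * grad L(theta_new),
        approximated by a short fixed-point iteration that repeatedly evaluates
        the gradient at the current estimate of theta_new."""
        theta_old = [p.detach().clone() for p in params]
        for _ in range(inner_iters):
            loss = loss_fn()                              # loss at the current estimate
            grads = torch.autograd.grad(loss, params)
            with torch.no_grad():
                for p, p0, g in zip(params, theta_old, grads):
                    p.copy_(p0 - lr * g)                  # next fixed-point iterate
        return float(loss)                                # loss from the last inner evaluation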
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Wave simulation in non-smooth media by PINN with quadratic neural network and PML condition [2.7651063843287718]
The recently proposed physics-informed neural network (PINN) has achieved successful applications in solving a wide range of partial differential equations (PDEs).
In this paper, we solve the acoustic and visco-acoustic scattered-field wave equations in the frequency domain with PINN, instead of the full wave equation, to remove the source perturbation.
We show that PML and quadratic neurons improve the results as well as attenuation and discuss the reason for this improvement.
arXiv Detail & Related papers (2022-08-16T13:29:01Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Inverse-Dirichlet Weighting Enables Reliable Training of Physics Informed Neural Networks [2.580765958706854]
We describe and remedy a failure mode that may arise from multi-scale dynamics with scale imbalances during training of deep neural networks.
PINNs are popular machine-learning templates that allow for seamless integration of physical equation models with data.
For inverse modeling using sequential training, we find that inverse-Dirichlet weighting protects a PINN against catastrophic forgetting.
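Inverse-Dirichlet weighting balances the terms of a composite PINN loss using gradient statistics. The sketch below shows generic gradient-variance-based balancing in that spirit; the paper's exact weighting rule may differ, and loss_terms and params are placeholders.

    import torch

    def balance_loss_weights(loss_terms, params):
        """Gradient-based loss balancing in the spirit of inverse-Dirichlet
        weighting: each term is weighted by the inverse of the standard
        deviation of its gradient so that no single term dominates training.
        (The paper's exact normalization may differ; this is a sketch.)"""
        stds = []
        for term in loss_terms:
            grads = torch.autograd.grad(term, params, retain_graph=True)
            flat = torch.cat([g.reshape(-1) for g in grads])
            stds.append(flat.std())
        max_std = torch.stack(stds).max()
        return [(max_std / (s + 1e-12)).item() for s in stds]   # one weight per loss term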
arXiv Detail & Related papers (2021-07-02T10:01:37Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
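To make the setting concrete, the sketch below generates the kind of quantized in-phase/quadrature measurements such a network consumes; the bit depth, noise level, and quantizer are assumptions for illustration, not SignalNet's exact signal model.

    import numpy as np

    def quantized_iq_samples(freqs, amps, n_samples=64, n_bits=3, noise_std=0.05, seed=0):
        """Illustrative measurement model for low-resolution sinusoid
        estimation: a noisy sum of complex sinusoids passed through a uniform
        b-bit quantizer applied separately to the in-phase (real) and
        quadrature (imaginary) parts. Frequencies are in cycles/sample."""
        rng = np.random.default_rng(seed)
        n = np.arange(n_samples)
        x = sum(a * np.exp(2j * np.pi * f * n) for a, f in zip(amps, freqs))
        x = x + noise_std * (rng.standard_normal(n_samples)
                             + 1j * rng.standard_normal(n_samples))
        levels = 2 ** n_bits

        def quantize(v):
            v = np.clip(v, -1.0, 1.0)                     # saturate the ADC range
            return np.round((v + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

        return quantize(x.real) + 1j * quantize(x.imag)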
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Predicting Training Time Without Training [120.92623395389255]
We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.
We leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model.
We are able to predict the time it takes to fine-tune a model to a given loss without having to perform any training.
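The linearized-model approximation above has a closed-form consequence for squared losses: with tangent kernel Theta = J J^T, the residual under gradient descent evolves as r_{t+1} = (I - lr * Theta) r_t. The NumPy sketch below traces that linearized loss curve; it is a generic illustration, not the paper's estimator.

    import numpy as np

    def linearized_loss_curve(jacobian, residual0, lr=1e-3, n_steps=1000):
        """Squared-loss trajectory of a linearized model under gradient descent
        (a generic tangent-kernel sketch): with Theta = J J^T, the residual
        evolves as r_{t+1} = (I - lr * Theta) r_t."""
        theta = jacobian @ jacobian.T                     # empirical tangent kernel
        r = np.asarray(residual0, dtype=float).copy()
        losses = []
        for _ in range(n_steps):
            losses.append(0.5 * float(r @ r))             # 0.5 * ||r||^2
            r = r - lr * (theta @ r)
        return losses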
arXiv Detail & Related papers (2020-08-28T04:29:54Z)