Interpretable Deep Learning Paradigm for Airborne Transient Electromagnetic Inversion
- URL: http://arxiv.org/abs/2503.22214v1
- Date: Fri, 28 Mar 2025 08:01:20 GMT
- Title: Interpretable Deep Learning Paradigm for Airborne Transient Electromagnetic Inversion
- Authors: Shuang Wang, Xuben Wang, Fei Deng, Xiaodong Yu, Peifan Jiang, Lifeng Mao
- Abstract summary: We propose a unified and interpretable deep learning inversion paradigm based on disentangled representation learning. We show that our method can directly use noisy data to accurately reconstruct the subsurface electrical structure.
- Score: 8.868747425596396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The extraction of geoelectric structural information from airborne transient electromagnetic (ATEM) data primarily involves data processing and inversion. Conventional methods rely on empirical parameter selection, making it difficult to process complex field data with high noise levels. Additionally, inversion computations are time-consuming and often suffer from multiple local minima. Existing deep learning-based approaches separate the data processing steps, where independently trained denoising networks struggle to ensure the reliability of subsequent inversions. Moreover, end-to-end networks lack interpretability. To address these issues, we propose a unified and interpretable deep learning inversion paradigm based on disentangled representation learning. The network explicitly decomposes noisy data into noise and signal factors, completing the entire data processing workflow based on the signal factors while incorporating physical information for guidance. This approach enhances the network's reliability and interpretability. The inversion results on field data demonstrate that our method can directly use noisy data to accurately reconstruct the subsurface electrical structure. Furthermore, it effectively processes data severely affected by environmental noise, which traditional methods struggle with, yielding improved lateral structural resolution.
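As a rough illustration of the decomposition described above (not the authors' actual network), the sketch below uses two encoders to split a noisy transient into a signal factor and a noise factor, reconstructs the input from both, and predicts a layered resistivity model from the signal factor alone; all module shapes, names, and loss terms are assumptions.

```python
# Minimal sketch of a disentangled encoder/decoder for noisy ATEM transients (assumed
# architecture, not the paper's). Input: noisy decay curve; outputs: signal factor,
# noise factor, reconstruction, and a 1-D resistivity model predicted from the signal factor.
import torch
import torch.nn as nn

class DisentangledATEMNet(nn.Module):
    def __init__(self, n_times=64, latent=32, n_layers_model=30):
        super().__init__()
        self.enc_signal = nn.Sequential(nn.Linear(n_times, 128), nn.ReLU(), nn.Linear(128, latent))
        self.enc_noise  = nn.Sequential(nn.Linear(n_times, 128), nn.ReLU(), nn.Linear(128, latent))
        self.decoder    = nn.Sequential(nn.Linear(2 * latent, 128), nn.ReLU(), nn.Linear(128, n_times))
        # the inversion head sees only the signal factor, which is what makes the split interpretable
        self.inv_head   = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, n_layers_model))

    def forward(self, d_noisy):
        z_sig = self.enc_signal(d_noisy)
        z_noi = self.enc_noise(d_noisy)
        d_rec = self.decoder(torch.cat([z_sig, z_noi], dim=-1))
        m_hat = self.inv_head(z_sig)          # layered log-resistivity (assumed parameterization)
        return z_sig, z_noi, d_rec, m_hat

def training_losses(model, d_noisy, d_clean, m_true):
    """Assumed loss terms: reconstruction, clean-signal supervision of the signal branch,
    and an inversion misfit; physics guidance (forward-modeling consistency) would be added here."""
    z_sig, z_noi, d_rec, m_hat = model(d_noisy)
    loss_rec = nn.functional.mse_loss(d_rec, d_noisy)
    # decoding the signal factor with a zeroed noise factor should recover the clean curve
    d_sig_only = model.decoder(torch.cat([z_sig, torch.zeros_like(z_noi)], dim=-1))
    loss_sig = nn.functional.mse_loss(d_sig_only, d_clean)
    loss_inv = nn.functional.mse_loss(m_hat, m_true)
    return loss_rec + loss_sig + loss_inv

if __name__ == "__main__":
    net = DisentangledATEMNet()
    d_noisy, d_clean, m_true = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 30)
    print(training_losses(net, d_noisy, d_clean, m_true))
```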
Related papers
- DREMnet: An Interpretable Denoising Framework for Semi-Airborne Transient Electromagnetic Signal [10.676243830905754]
The SATEM method is capable of conducting rapid surveys over large-scale and hard-to-reach areas. Traditional denoising techniques rely on parameter selection strategies, which are insufficient for processing field data in noisy environments. We propose an interpretable decoupled representation learning framework, termed DREMnet, that disentangles data into content and context factors.
arXiv Detail & Related papers (2025-03-28T08:13:23Z) - A convolutional neural network approach to deblending seismic data [1.5488464287814563]
We present a data-driven deep learning-based method for fast and efficient seismic deblending.
A convolutional neural network (CNN) is designed according to the specific characteristics of seismic data.
After training and validation of the network, seismic deblending can be performed in near real time.
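A minimal sketch of such a CNN deblending setup, assuming the network predicts the blending interference to be subtracted from the input gather; shapes and layer sizes are placeholders rather than the paper's design:

```python
# Toy CNN that maps a blended seismic gather to a deblended one by predicting and
# subtracting the blending interference (a common design choice, assumed here).
import torch
import torch.nn as nn

deblend_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

blended = torch.randn(4, 1, 128, 64)     # (batch, channel, time samples, traces)
residual = deblend_cnn(blended)          # predicted blending interference
deblended = blended - residual           # recovered primary gather
print(deblended.shape)
```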
arXiv Detail & Related papers (2024-09-12T10:54:35Z) - Hierarchical Over-the-Air Federated Learning with Awareness of Interference and Data Heterogeneity [3.8798345704175534]
We introduce a scalable transmission scheme that efficiently uses a single wireless resource through over-the-air computation.
We show that despite the interference and the data heterogeneity, the proposed scheme achieves high learning accuracy and can significantly outperform the conventional hierarchical algorithm.
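The over-the-air idea can be sketched in a few lines: clients pre-scale their updates by the inverse channel gain and transmit simultaneously, so the server receives a noisy superposition that directly approximates the sum of updates. The channel model, power control, and noise level below are illustrative assumptions:

```python
# Toy over-the-air aggregation: simultaneous analog transmissions add up in the channel,
# giving the server a noisy estimate of the average model update in one shot.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 10, 1000
updates = rng.normal(size=(n_clients, dim))        # local gradients / model deltas
h = rng.uniform(0.5, 1.5, size=n_clients)          # real channel gains (assumed known)

tx = updates / h[:, None]                          # channel inversion so signals add coherently
rx = tx.T @ h + rng.normal(scale=0.05, size=dim)   # superposition over the air + receiver noise
aggregate = rx / n_clients                         # server's estimate of the average update

err = np.linalg.norm(aggregate - updates.mean(axis=0)) / np.linalg.norm(updates.mean(axis=0))
print(err)                                         # relative aggregation error due to noise
```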
arXiv Detail & Related papers (2024-01-02T21:43:01Z) - Loop Polarity Analysis to Avoid Underspecification in Deep Learning [0.0]
In this paper, we turn to loop polarity analysis as a tool for specifying the causal structure of a data-generating process.
We show how measuring the polarity of the different feedback loops that compose a system can lead to more robust inferences on the part of neural networks.
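As a minimal illustration (not the paper's procedure), the polarity of a feedback loop is the sign of the product of its link gains, with +1 indicating a reinforcing loop and -1 a balancing one:

```python
# Toy loop-polarity helper: the example loops and gains are illustrative assumptions.
import numpy as np

def loop_polarity(link_gains):
    """Sign of the product of the causal link gains around one feedback loop."""
    return int(np.sign(np.prod(link_gains)))

# births -> population (+), population -> births (+)   => reinforcing loop
print(loop_polarity([+0.03, +1.0]))   # 1
# population -> deaths (+), deaths -> population (-)    => balancing loop
print(loop_polarity([+0.02, -1.0]))   # -1
```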
arXiv Detail & Related papers (2023-09-18T23:49:42Z) - Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against accumulated error perturbations when regularized towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z) - Deep Preconditioners and their application to seismic wavefield processing [0.0]
Sparsity-promoting inversion, coupled with fixed-basis sparsifying transforms, represents the go-to approach for many processing tasks.
We propose to train an AutoEncoder network to learn a direct mapping between the input seismic data and a representative latent manifold.
The trained decoder is subsequently used as a nonlinear preconditioner for the physics-driven inverse problem at hand.
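A sketch of the decoder-as-preconditioner idea, assuming a pretrained decoder and a generic linear modeling operator: the inversion is carried out over the latent code rather than the data themselves, and all names and sizes are placeholders:

```python
# Optimize over the latent code z so that the physics operator applied to decoder(z)
# matches the observed data; decoder, operator F, and sizes are stand-ins.
import torch
import torch.nn as nn

latent, n_data = 16, 128
decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_data))  # assume pretrained
F = torch.randn(n_data, n_data) * 0.1      # stand-in for the physics/modeling operator
y_obs = torch.randn(n_data)                # observed (e.g. subsampled, noisy) data

z = torch.zeros(latent, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.sum((F @ decoder(z) - y_obs) ** 2)   # data misfit in the latent parameterization
    loss.backward()
    opt.step()

x_rec = decoder(z).detach()                # reconstructed seismic data
print(float(loss))
```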
arXiv Detail & Related papers (2022-07-20T14:25:32Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
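For context, the standard autoencoder baseline that such methods build on scores anomalies by reconstruction error after training on normal samples only; the toy sketch below shows that baseline idea, not the paper's self-supervised training regime:

```python
# Generic autoencoder anomaly scoring: fit normal samples, then flag inputs whose
# reconstruction error is large. Sizes and data are placeholders.
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))   # toy autoencoder
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal = torch.rand(256, 784)              # stand-in "normal" training images (flattened)
for _ in range(50):                        # fit the normal manifold
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward(); opt.step()

def anomaly_score(x):                      # per-sample reconstruction error
    return ((ae(x) - x) ** 2).mean(dim=1)

test = torch.rand(8, 784)
print(anomaly_score(test))                 # threshold these scores to flag anomalies
```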
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Deep Active Learning with Noise Stability [24.54974925491753]
Uncertainty estimation for unlabeled data is crucial to active learning.
We propose a novel algorithm that leverages noise stability to estimate data uncertainty.
Our method is generally applicable in various tasks, including computer vision, natural language processing, and structural data analysis.
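A hedged sketch of the noise-stability idea: perturb the trained weights with small random noise several times and rank unlabeled samples by how much their predictions move. The perturbation scale, number of trials, and scoring rule are assumptions rather than the paper's exact algorithm:

```python
# Noise-stability uncertainty for active learning: unstable predictions under small weight
# perturbations indicate uncertain samples worth labeling.
import copy
import torch
import torch.nn as nn

def noise_stability_uncertainty(model, x, sigma=0.01, n_trials=10):
    base = model(x).detach()
    scores = torch.zeros(x.shape[0])
    for _ in range(n_trials):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * p.abs().mean() * torch.randn_like(p))
        scores += (noisy(x).detach() - base).norm(dim=1)
    return scores / n_trials               # larger = less stable = more uncertain

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
pool = torch.randn(100, 20)
u = noise_stability_uncertainty(model, pool)
query = u.topk(10).indices                 # pick the 10 most uncertain samples to label
print(query)
```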
arXiv Detail & Related papers (2022-05-26T13:21:01Z) - Context-Preserving Instance-Level Augmentation and Deformable Convolution Networks for SAR Ship Detection [50.53262868498824]
Shape deformation of targets in SAR images due to random orientation and partial information loss is an essential challenge in SAR ship detection.
We propose a data augmentation method to train a deep network that is robust to partial information loss within the targets.
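One simple way to emulate partial information loss inside targets is to erase random patches within each annotated ship box while leaving the surrounding context untouched; the numpy sketch below illustrates that general idea only and is not the paper's context-preserving method:

```python
# Toy instance-level augmentation: drop a random patch inside each target box to simulate
# partial information loss, keeping the surrounding clutter/context intact.
import numpy as np

def erase_inside_boxes(image, boxes, max_frac=0.4, rng=np.random.default_rng(0)):
    out = image.copy()
    for (y0, x0, y1, x1) in boxes:                   # box corners in pixel coordinates
        h, w = y1 - y0, x1 - x0
        eh = int(h * rng.uniform(0.1, max_frac))
        ew = int(w * rng.uniform(0.1, max_frac))
        ey, ex = rng.integers(y0, y1 - eh), rng.integers(x0, x1 - ew)
        out[ey:ey + eh, ex:ex + ew] = 0.0            # erase part of the target only
    return out

sar = np.random.rayleigh(size=(256, 256)).astype(np.float32)
aug = erase_inside_boxes(sar, boxes=[(40, 60, 90, 120), (150, 30, 200, 80)])
print(aug.shape)
```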
arXiv Detail & Related papers (2022-02-14T07:01:01Z) - Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
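A compact GAIN-style outline with a convolutional generator and discriminator for gridded fields; the hint mechanism, layer sizes, and loss weights below are generic assumptions rather than the Conv-GAIN implementation:

```python
# GAIN-style imputation sketch: the generator fills missing entries, the discriminator
# tries to tell observed from imputed entries, guided by a partial "hint" of the mask.
import torch
import torch.nn as nn

conv = lambda ci, co: nn.Conv2d(ci, co, 3, padding=1)
G = nn.Sequential(conv(2, 32), nn.ReLU(), conv(32, 1))      # input: [masked field, mask]
D = nn.Sequential(conv(2, 32), nn.ReLU(), conv(32, 1))      # input: [imputed field, hint]
opt_g, opt_d = torch.optim.Adam(G.parameters(), 1e-3), torch.optim.Adam(D.parameters(), 1e-3)

x = torch.rand(16, 1, 32, 32)                    # stand-in surge field
m = (torch.rand_like(x) > 0.3).float()           # 1 = observed, 0 = missing

for _ in range(100):
    x_hat = G(torch.cat([x * m, m], dim=1))
    x_imp = m * x + (1 - m) * x_hat              # keep observed values, fill the gaps
    hint = m * (torch.rand_like(m) < 0.9).float()            # simplified hint mechanism
    d_prob = torch.sigmoid(D(torch.cat([x_imp.detach(), hint], dim=1)))
    loss_d = nn.functional.binary_cross_entropy(d_prob, m)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    d_prob = torch.sigmoid(D(torch.cat([x_imp, hint], dim=1)))
    loss_g = -torch.mean((1 - m) * torch.log(d_prob + 1e-8)) \
             + 10.0 * torch.mean((m * (x - x_hat)) ** 2)     # fool D on gaps + fit observed entries
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(float(loss_g))
```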
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
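The augmentation idea can be sketched with a toy invertible coupling layer: encode x to z = f(x), perturb z, and decode x_aug = f^{-1}(z + delta). The one-layer flow and the random (rather than adversarial) perturbation below are stand-ins for the paper's setup:

```python
# Latent-space augmentation with a toy normalizing-flow layer (invertible by construction).
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """Single additive coupling layer: exactly invertible."""
    def __init__(self, dim=8):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 32), nn.ReLU(), nn.Linear(32, dim - self.half))

    def forward(self, x):                  # z = f(x)
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x1, x2 + self.net(x1)], dim=1)

    def inverse(self, z):                  # x = f^{-1}(z)
        z1, z2 = z[:, :self.half], z[:, self.half:]
        return torch.cat([z1, z2 - self.net(z1)], dim=1)

flow = Coupling(dim=8)                     # assume this would be trained as a density model
x = torch.randn(4, 8)
z = flow(x)
x_aug = flow.inverse(z + 0.1 * torch.randn_like(z))   # perturb in latent space, decode back
print(torch.allclose(flow.inverse(z), x, atol=1e-5), x_aug.shape)
```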
arXiv Detail & Related papers (2021-08-18T03:20:00Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
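To make the input concrete, the snippet below generates a two-sinusoid mixture, samples its in-phase/quadrature components, and quantizes them to three bits and to one bit; the frequencies, amplitudes, and quantizer are illustrative assumptions:

```python
# Generate the kind of low-resolution IQ data a sinusoid-estimation network would consume.
import numpy as np

def quantize(x, bits, max_abs=1.5):
    levels = 2 ** bits
    step = 2 * max_abs / levels
    return np.clip(np.round(x / step), -levels // 2, levels // 2 - 1) * step + step / 2

rng = np.random.default_rng(0)
n, k = 64, 2                                        # samples per snapshot, number of sinusoids
t = np.arange(n)
freqs, amps = rng.uniform(0, 0.5, k), rng.uniform(0.5, 1.0, k)
sig = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))
sig += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))    # additive receiver noise

iq_3bit = quantize(sig.real, 3) + 1j * quantize(sig.imag, 3)    # three-bit IQ input
iq_1bit = np.sign(sig.real) + 1j * np.sign(sig.imag)            # extreme one-bit case
print(iq_3bit[:4], iq_1bit[:4])
```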
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
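A sketch of the adaptive-depth idea using plain unrolled ISTA layers with an early-exit rule: easy inputs stop after a few layers, hard ones use more. The step size, threshold, and halting criterion are assumptions, not the paper's learned policy:

```python
# Unrolled sparse recovery with a simple halting rule standing in for a learned depth policy.
import numpy as np

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2
lam, max_layers = 0.01, 200
x = np.zeros(n)
for depth in range(1, max_layers + 1):
    x_prev = x
    x = soft(x + step * A.T @ (y - A @ x), lam * step)   # one ISTA "layer"
    if np.linalg.norm(x - x_prev) < 1e-4:                # halting rule: stop once the update stalls
        break
print(depth, np.linalg.norm(x - x_true))                 # layers actually used, recovery error
```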
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.