Feature-based Inversion of 2.5D Controlled Source Electromagnetic Data using Generative Priors
- URL: http://arxiv.org/abs/2601.02145v1
- Date: Mon, 05 Jan 2026 14:18:14 GMT
- Title: Feature-based Inversion of 2.5D Controlled Source Electromagnetic Data using Generative Priors
- Authors: Hongyu Zhou, Haoran Sun, Rui Guo, Maokun Li, Fan Yang, Shenheng Xu,
- Abstract summary: In this study, we investigate feature-based 2.5D marine controlled-source electromagnetic (mCSEM) data inversion using generative priors. We adopt a plug-and-play strategy in which a variational autoencoder (VAE) is used solely to learn prior information on conductivity distributions. Numerical and field experiments demonstrate that the proposed approach effectively incorporates prior information, improves reconstruction accuracy, and exhibits good generalization performance.
- Score: 44.090837880190115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we investigate feature-based 2.5D marine controlled-source electromagnetic (mCSEM) data inversion using generative priors. Two-and-a-half-dimensional modeling with the finite difference method (FDM) is adopted to compute the response of horizontal electric dipole (HED) excitation. Rather than using a neural network to approximate the entire inverse mapping in a black-box manner, we adopt a plug-and-play strategy in which a variational autoencoder (VAE) is used solely to learn prior information on conductivity distributions. During the inversion process, the conductivity model is iteratively updated using the Gauss-Newton method, while the model space is constrained by projections onto the learned VAE decoder. This framework preserves explicit control over the data misfit and enables flexible adaptation to different survey configurations. Numerical and field experiments demonstrate that the proposed approach effectively incorporates prior information, improves reconstruction accuracy, and exhibits good generalization performance.
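To make the workflow concrete, here is a minimal numerical sketch of a plug-and-play Gauss-Newton loop with a projection step, in the spirit of the abstract. The linear forward operator, its constant Jacobian, and the smoothing `vae_project` stand-in are illustrative assumptions; the paper's 2.5D FDM solver and trained VAE decoder would take their places.

```python
# Hypothetical toy setup: a linear forward operator stands in for the
# 2.5D FDM response to HED excitation, and a smoothing filter stands in
# for the encode-decode projection through a trained VAE.
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data = 64, 32
A = rng.standard_normal((n_data, n_model))   # toy forward operator
m_true = rng.standard_normal(n_model)
d_obs = A @ m_true                           # synthetic observed data

def forward(m):
    return A @ m                             # stand-in forward solve

def jacobian(m):
    return A                                 # constant for the linear toy

def vae_project(m):
    # Stand-in for projecting onto the learned VAE manifold; here a
    # moving-average smoother mimics the regularizing effect.
    return np.convolve(m, np.ones(5) / 5.0, mode="same")

m = np.zeros(n_model)
lam = 1e-2                                   # damping for stability
for it in range(20):
    r = forward(m) - d_obs                   # data residual
    J = jacobian(m)
    # Damped Gauss-Newton step: (J^T J + lam I) dm = -J^T r
    dm = np.linalg.solve(J.T @ J + lam * np.eye(n_model), -J.T @ r)
    m = vae_project(m + dm)                  # constrain to the learned prior
```

Keeping the Gauss-Newton step explicit is what preserves direct control over the data misfit, while the projection confines iterates to the learned prior; swapping the forward operator adapts the loop to a different survey configuration.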
Related papers
- MEG-GPT: A transformer-based foundation model for magnetoencephalography data [6.336623115095147]
Recent advances in deep learning have enabled significant progress in other domains, such as language and vision, by using foundation models at scale.
Here, we introduce MEG-GPT, a transformer-based foundation model that uses time-attention and next time-point prediction.
We trained MEG-GPT on tokenised brain-region time-courses extracted from a large-scale MEG dataset.
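As a rough illustration of that objective (next time-point prediction over tokenised time-courses), here is a hedged PyTorch sketch; the vocabulary size, model width, and random tokens are placeholders, not MEG-GPT's actual configuration.

```python
# Minimal causal-transformer sketch of next time-point prediction on
# tokenised signals; all sizes and data below are illustrative.
import torch
import torch.nn as nn

vocab, width, seq_len = 256, 64, 128
embed = nn.Embedding(vocab, width)
layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(width, vocab)

tokens = torch.randint(0, vocab, (8, seq_len))   # fake tokenised time-courses
L = seq_len - 1
causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)

h = encoder(embed(tokens[:, :-1]), mask=causal)  # attend only to the past
logits = head(h)                                  # predict the next token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1)
)
loss.backward()
```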
arXiv Detail & Related papers (2025-10-20T20:18:38Z)
- Fusing CFD and measurement data using transfer learning [49.1574468325115]
We introduce a non-linear method based on neural networks that combines simulation and measurement data via transfer learning.
In the first step, the neural network is trained on simulation data to learn spatial features of the distributed quantities.
The second step involves transfer learning on the measurement data to correct for systematic errors between simulation and measurement by re-training only a small subset of the entire neural network model.
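A minimal PyTorch sketch of this two-step recipe, assuming a generic fully-connected network and placeholder data: pre-train everything on simulation data, then freeze the feature layers and re-train only a small head on measurements.

```python
# Step 1 trains on simulation data; step 2 fine-tunes only the last layer
# on measurements. Architecture, sizes, and data are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),   # feature layers, learned on simulation
    nn.Linear(64, 16),              # small subset re-trained on measurements
)

def fit(model, params, x, y, steps=100, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()

x_sim, y_sim = torch.randn(512, 32), torch.randn(512, 16)
x_meas, y_meas = torch.randn(64, 32), torch.randn(64, 16)

fit(net, net.parameters(), x_sim, y_sim)        # step 1: simulation
for p in net[:4].parameters():
    p.requires_grad_(False)                      # freeze feature layers
fit(net, net[4].parameters(), x_meas, y_meas)    # step 2: measurement head
```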
arXiv Detail & Related papers (2025-07-28T07:21:46Z)
- Improving Out-of-Distribution Detection via Dynamic Covariance Calibration [12.001290283557466]
Out-of-Distribution (OOD) detection is essential for the trustworthiness of AI systems.
We argue that the influence of ill-distributed samples can be corrected by dynamically adjusting the prior geometry.
Our approach significantly enhances OOD detection across various models.
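The summary stays high-level, so the sketch below shows only the standard covariance-based (Mahalanobis-style) score that such geometry-calibration methods build on; the paper's dynamic calibration itself is not reproduced, and all features here are synthetic.

```python
# Mahalanobis-style OOD scoring from an (assumed) in-distribution feature set;
# geometry-calibration methods adjust the covariance this score relies on.
import numpy as np

feats = np.random.randn(1000, 16)                      # in-distribution features
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(16)  # regularized covariance
prec = np.linalg.inv(cov)

def ood_score(z):
    d = z - mu
    return float(d @ prec @ d)        # larger distance => more likely OOD

print(ood_score(np.random.randn(16)))  # typical in-distribution score
print(ood_score(10 * np.ones(16)))     # far from the training distribution
```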
arXiv Detail & Related papers (2025-06-11T05:05:26Z)
- Self-Refining Training for Amortized Density Functional Theory [5.5541132320126945]
We propose a novel method that reduces the dependency of amortized DFT solvers on large pre-collected datasets by introducing a self-refining training strategy.
We derive our method as a minimization of the variational upper bound on the KL-divergence measuring the discrepancy between the generated samples and the target Boltzmann distribution defined by the ground state energy.
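As a toy instance of the stated idea, training on self-generated samples by minimizing a KL divergence to a Boltzmann target, here is a sketch that fits a reparameterized Gaussian sampler to exp(-E) via the reverse KL; the quadratic energy is a stand-in for a ground-state energy surface.

```python
# Fit a Gaussian sampler q to p(x) ~ exp(-E(x)) by minimizing
# KL(q || p) = E_q[E(x)] - H(q) + const, using only self-generated samples.
import torch

def energy(x):                           # toy stand-in energy surface
    return 0.5 * (x ** 2).sum(dim=-1)

mu = torch.zeros(2, requires_grad=True)
log_std = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([mu, log_std], lr=1e-2)

for step in range(500):
    eps = torch.randn(256, 2)
    x = mu + eps * log_std.exp()         # reparameterized samples from q
    entropy = log_std.sum()              # Gaussian entropy up to a constant
    loss = energy(x).mean() - entropy    # reverse KL up to a constant
    opt.zero_grad()
    loss.backward()
    opt.step()
```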
arXiv Detail & Related papers (2025-06-02T00:32:32Z)
- GP-FL: Model-Based Hessian Estimation for Second-Order Over-the-Air Federated Learning [52.295563400314094]
Second-order methods are widely adopted to improve the convergence rate of learning algorithms.
This paper introduces a novel second-order FL framework tailored for wireless channels.
arXiv Detail & Related papers (2024-12-05T04:27:41Z)
- Physically Guided Deep Unsupervised Inversion for 1D Magnetotelluric Models [16.91835461818938]
This paper presents a new deep unsupervised inversion algorithm guided by physics to estimate 1D Magnetotelluric (MT) models.
Instead of using datasets with the observed data and their respective models as labels during training, our method employs a differentiable modeling operator that physically guides the cost function minimization.
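A minimal sketch of this label-free training idea, assuming a toy linear operator as the differentiable forward model: the network maps observed data to a model, the operator re-simulates the data, and the misfit alone drives training.

```python
# Unsupervised, physics-guided training: no model labels, only a
# differentiable forward operator (a toy linear map below) in the loss.
import torch
import torch.nn as nn

n_data, n_model = 24, 48
F_op = torch.randn(n_data, n_model)            # stand-in differentiable physics

net = nn.Sequential(nn.Linear(n_data, 64), nn.ReLU(), nn.Linear(64, n_model))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

d_obs = torch.randn(128, n_data)               # observed data only, no labels
for _ in range(200):
    m_pred = net(d_obs)                        # network proposes models
    d_sim = m_pred @ F_op.T                    # re-simulate data from models
    loss = nn.functional.mse_loss(d_sim, d_obs)  # data misfit guides training
    opt.zero_grad()
    loss.backward()
    opt.step()
```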
arXiv Detail & Related papers (2024-10-20T04:17:59Z)
- 3-D Magnetotelluric Deep Learning Inversion Guided by Pseudo-Physical Information [11.790581455752292]
Magnetotelluric deep learning (DL) inversion methods that jointly combine data-driven and physics-driven approaches have become a hot topic in recent years.
We introduce pseudo-physical information through the forward modeling of neural networks (NNs) to compute this portion of the loss.
We propose a new input mode that involves masking and adding noise to the data, simulating the field-data environment of 3-D MT inversion.
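The proposed input mode lends itself to a short sketch; the mask ratio and noise level below are assumed values for illustration.

```python
# Mask part of the input and add noise, mimicking field-data conditions.
import torch

def augment(d, mask_ratio=0.2, noise_std=0.05):
    mask = (torch.rand_like(d) > mask_ratio).float()  # drop ~20% of entries
    noisy = d + noise_std * torch.randn_like(d)       # additive Gaussian noise
    return noisy * mask

d = torch.randn(4, 32, 32)                            # toy MT responses
d_aug = augment(d)
```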
arXiv Detail & Related papers (2024-10-12T06:39:31Z)
- Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
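A hedged sketch of what a shift control module acting on bottleneck ("h-space") activations could look like; the module, shapes, and hyperparameters are illustrative guesses, not the paper's implementation.

```python
# Learnable global directions added to bottleneck feature maps; jointly
# optimizing such directions is the mechanism the summary describes.
import torch
import torch.nn as nn

class ShiftControl(nn.Module):
    def __init__(self, n_dirs, channels):
        super().__init__()
        # One learnable direction per candidate attribute, broadcast spatially.
        self.dirs = nn.Parameter(torch.randn(n_dirs, channels) * 0.01)

    def forward(self, h, k, alpha):
        # Shift activations h along direction k with strength alpha.
        return h + alpha * self.dirs[k].view(1, -1, 1, 1)

h = torch.randn(2, 512, 8, 8)          # toy U-Net bottleneck activations
shifted = ShiftControl(n_dirs=10, channels=512)(h, k=3, alpha=1.5)
```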
arXiv Detail & Related papers (2023-10-15T18:44:30Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
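To make the Laplace idea concrete, here is a toy sketch that approximates a 2D density by a Gaussian centred at its mode, with full covariance from the inverse Hessian; the log-density is a placeholder, and VLAEs' amortized deep-generative setting is not reproduced.

```python
# Laplace approximation: find the mode, then take the curvature there as
# the inverse covariance of a full-covariance Gaussian.
import torch

def log_p(z):                          # toy unnormalized log posterior
    return -0.5 * (z[0] ** 2 + 2.0 * z[1] ** 2 + z[0] * z[1])

z = torch.tensor([1.0, -1.0], requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(200):                   # 1) gradient ascent to the mode
    opt.zero_grad()
    (-log_p(z)).backward()
    opt.step()

H = torch.autograd.functional.hessian(lambda v: -log_p(v), z.detach())
cov = torch.linalg.inv(H)              # 2) full covariance from curvature
```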
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Data-Driven Shadowgraph Simulation of a 3D Object [50.591267188664666]
We replace the numerical code with a computationally cheaper projection-based surrogate model.
The model is able to approximate the electric fields at a given time without computing all preceding electric fields as required by numerical methods.
The model achieves good-quality reconstruction for data perturbations within a narrow range of simulation parameters and can be applied to input data of large size.
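A minimal sketch of a projection-based surrogate in this spirit, assuming a POD/PCA basis extracted from snapshots and a linear map from simulation parameters (time included) to basis coefficients; all data are synthetic placeholders.

```python
# Compress snapshots with an SVD (POD) basis, then regress parameters ->
# coefficients, so a field at any requested time needs no time-stepping.
import numpy as np

rng = np.random.default_rng(1)
params = rng.uniform(0, 1, (200, 3))                 # time plus two knobs
snapshots = np.sin(params @ rng.standard_normal((3, 500)))  # fake fields
mean = snapshots.mean(axis=0)

_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = Vt[:10]                                      # 10 POD modes
coeffs = (snapshots - mean) @ basis.T                # training targets

# Linear coefficient regression; any regressor could replace this.
W, *_ = np.linalg.lstsq(params, coeffs, rcond=None)

def surrogate(p):
    return mean + (p @ W) @ basis                    # field at arbitrary params

print(surrogate(np.array([0.5, 0.2, 0.7])).shape)    # (500,)
```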
arXiv Detail & Related papers (2021-06-01T08:46:04Z)