Performance of Machine Learning Methods for Gravity Inversion: Successes and Challenges
- URL: http://arxiv.org/abs/2510.09632v1
- Date: Sun, 28 Sep 2025 19:19:07 GMT
- Title: Performance of Machine Learning Methods for Gravity Inversion: Successes and Challenges
- Authors: Vahid Negahdari, Shirin Samadi Bahrami, Seyed Reza Moghadasi, Mohammad Reza Razvan
- Abstract summary: Recent advances in machine learning have motivated data-driven approaches for gravity inversion. We first design a convolutional neural network trained to directly map gravity anomalies to density fields. To further investigate generative modeling, we employ Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gravity inversion is the problem of estimating subsurface density distributions from observed gravitational field data. We consider the two-dimensional (2D) case, in which recovering density models from one-dimensional (1D) measurements leads to an underdetermined system with substantially more model parameters than measurements, making the inversion ill-posed and non-unique. Recent advances in machine learning have motivated data-driven approaches for gravity inversion. We first design a convolutional neural network (CNN) trained to directly map gravity anomalies to density fields, where a customized data structure is introduced to enhance the inversion performance. To further investigate generative modeling, we employ Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), reformulating inversion as a latent-space optimization constrained by the forward operator. In addition, we assess whether classical iterative solvers such as Gradient Descent (GD), GMRES, LGMRES, and a recently proposed Improved Conjugate Gradient (ICG) method can refine CNN-based initial guesses and improve inversion accuracy. Our results demonstrate that CNN inversion not only provides the most reliable reconstructions but also significantly outperforms previously reported methods. Generative models remain promising but unstable, and iterative solvers offer only marginal improvements, underscoring the persistent ill-posedness of gravity inversion.
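The setting the abstract describes, a linear forward operator mapping a 2D density grid to far fewer 1D surface measurements, with gradient descent refining an initial guess, can be sketched in a few lines. This is a toy illustration, not the paper's actual model: the grid size, kernel, and block anomaly are assumptions, and the constant initial model stands in for the CNN prediction.

```python
import numpy as np

# Toy 2D gravity forward model: each surface station sees the vertical
# attraction of every cell, with a point-mass kernel ~ z / r^3.
# Grid dimensions and the buried block are illustrative choices only.
ny, nx = 16, 16          # density grid: 256 unknowns
n_obs = 32               # 1D surface measurements -> underdetermined system
G = 6.674e-11            # gravitational constant

xs = np.linspace(0.0, 1.0, nx)        # cell horizontal positions
zs = np.linspace(0.05, 1.0, ny)       # cell depths
obs_x = np.linspace(0.0, 1.0, n_obs)  # station positions at the surface

# Assemble the (n_obs x ny*nx) linear forward operator A
A = np.zeros((n_obs, ny * nx))
for i, ox in enumerate(obs_x):
    for j, z in enumerate(zs):
        for k, x in enumerate(xs):
            r2 = (x - ox) ** 2 + z ** 2
            A[i, j * nx + k] = G * z / r2 ** 1.5

# Synthetic "true" density: a dense buried block in a homogeneous background
rho_true = np.zeros((ny, nx))
rho_true[6:10, 5:11] = 500.0
d = A @ rho_true.ravel()              # observed gravity anomaly

# Gradient descent on the data misfit ||A*rho - d||^2, starting from a
# crude constant guess (standing in for a CNN-based initial model).
rho0 = np.full(ny * nx, 100.0)
rho = rho0.copy()
step = 0.5 / np.linalg.norm(A, 2) ** 2   # safe step: below 2 / ||A||_2^2
for _ in range(2000):
    rho -= step * A.T @ (A @ rho - d)

misfit0 = np.linalg.norm(A @ rho0 - d)
misfit = np.linalg.norm(A @ rho - d)
print(misfit0, misfit)  # data fit improves, yet the model stays non-unique
```

The data misfit drops, but because n_obs is much smaller than the number of cells, many density models fit `d` equally well, which is the ill-posedness the abstract says iterative solvers cannot overcome on their own.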
Related papers
- Enhanced 3D Gravity Inversion Using ResU-Net with Density Logging Constraints: A Dual-Phase Training Approach [10.808596914579544]
Data-driven gravity inversion methods based on deep learning (DL) possess physical property recovery capabilities that conventional regularization methods lack. Existing DL methods suffer from insufficient prior information constraints, which leads to inversion models with large data fitting errors and unreliable results. We propose a novel approach that integrates prior density well logging information to address the above issues.
arXiv Detail & Related papers (2026-01-06T10:24:11Z) - Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications [99.72917069918485]
We propose a novel sparse model inversion strategy to speed up existing dense inversion methods. Specifically, we invert semantic foregrounds while stopping the inversion of noisy backgrounds and potential spurious correlations.
arXiv Detail & Related papers (2025-10-31T05:14:36Z) - Generative Model Inversion Through the Lens of the Manifold Hypothesis [98.37040155914595]
Model inversion attacks (MIAs) aim to reconstruct class-representative samples from trained models. Recent generative MIAs utilize generative adversarial networks to learn image priors that guide the inversion process.
arXiv Detail & Related papers (2025-09-24T14:39:25Z) - Weight Spectra Induced Efficient Model Adaptation [54.8615621415845]
Fine-tuning large-scale foundation models incurs prohibitive computational costs. We show that fine-tuning predominantly amplifies the top singular values while leaving the remainder largely intact. We propose a novel method that leverages learnable rescaling of top singular directions.
arXiv Detail & Related papers (2025-05-29T05:03:29Z) - Gradient Inversion Transcript: Leveraging Robust Generative Priors to Reconstruct Training Data from Gradient Leakage [3.012404329139943]
Gradient Inversion Transcript (GIT) is a novel generative approach for reconstructing training data from leaked gradients. GIT consistently outperforms existing methods across multiple datasets.
arXiv Detail & Related papers (2025-05-26T14:17:00Z) - Restoration Score Distillation: From Corrupted Diffusion Pretraining to One-Step High-Quality Generation [82.39763984380625]
We propose Restoration Score Distillation (RSD), a principled generalization of Denoising Score Distillation (DSD). RSD accommodates a broader range of corruption types, such as blurred, incomplete, or low-resolution images. It consistently surpasses its teacher model across diverse restoration tasks on both natural and scientific datasets.
arXiv Detail & Related papers (2025-05-19T17:21:03Z) - Mean flow data assimilation using physics-constrained Graph Neural Networks [0.0]
This study introduces a novel data assimilation approach that integrates Graph Neural Networks (GNNs) with optimisation techniques to enhance the accuracy of mean flow reconstruction. The GNN framework is well-suited for handling unstructured data, which is common in the complex geometries encountered in Computational Fluid Dynamics (CFD). Results demonstrate significant improvements in the accuracy of mean flow reconstructions, even with limited training data, compared to analogous purely data-driven models.
arXiv Detail & Related papers (2024-11-14T14:31:52Z) - Subsurface Characterization using Ensemble-based Approaches with Deep Generative Models [2.184775414778289]
Inverse modeling is limited for ill-posed, high-dimensional applications due to computational costs and poor prediction accuracy with sparse datasets.
We combine a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) and Ensemble Smoother with Multiple Data Assimilation (ES-MDA).
WGAN-GP is trained to generate high-dimensional K fields from a low-dimensional latent space and ES-MDA updates the latent variables by assimilating available measurements.
arXiv Detail & Related papers (2023-10-02T01:27:10Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Low-rank Tensor Assisted K-space Generative Model for Parallel Imaging Reconstruction [14.438899814473446]
We present a new idea, low-rank tensor assisted k-space generative model (LR-KGM) for parallel imaging reconstruction.
This means that we transform original prior information into high-dimensional prior information for learning.
Experimental comparisons with state-of-the-art methods demonstrate that the proposed LR-KGM method achieves better performance.
arXiv Detail & Related papers (2022-12-11T13:34:43Z) - Generative models and Bayesian inversion using Laplace approximation [0.3670422696827525]
Recently, inverse problems have been solved using generative models as highly informative priors.
We show that derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model.
arXiv Detail & Related papers (2022-03-15T10:05:43Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.