Physically Guided Deep Unsupervised Inversion for 1D Magnetotelluric Models
- URL: http://arxiv.org/abs/2410.15274v1
- Date: Sun, 20 Oct 2024 04:17:59 GMT
- Title: Physically Guided Deep Unsupervised Inversion for 1D Magnetotelluric Models
- Authors: Paul Goyes-Peñafiel, Umair bin Waheed, Henry Arguello
- Abstract summary: We present a new deep inversion algorithm guided by physics to estimate 1D Magnetotelluric (MT) models.
Our method employs a differentiable modeling operator that physically guides the cost function minimization.
We test the proposed method with field and synthetic data at different acquisition frequencies, demonstrating that the recovered resistivity models are more accurate than those obtained with state-of-the-art techniques.
- Score: 16.91835461818938
- License:
- Abstract: The global demand for unconventional energy sources such as geothermal energy and white hydrogen requires new exploration techniques for precise subsurface structure characterization and potential reservoir identification. Magnetotelluric (MT) inversion is crucial for these tasks, providing critical information on the distribution of subsurface electrical resistivity at depths ranging from hundreds to thousands of meters. However, traditional iterative algorithm-based inversion methods require the adjustment of multiple parameters, demanding time-consuming and exhaustive tuning processes to achieve proper cost function minimization. Although recent advances have incorporated deep learning algorithms for MT inversion, these have been primarily based on supervised learning, which needs large labeled datasets for training. This causes generalization issues and restricts the recovered model characteristics to the features represented by the neural network. This work utilizes TensorFlow operations to create a differentiable forward MT operator, leveraging its automatic differentiation capability. Moreover, instead of solving for the subsurface model directly, as classical algorithms do, this paper presents a new deep unsupervised inversion algorithm guided by physics to estimate 1D MT models. Instead of using datasets with the observed data and their respective models as labels during training, our method employs a differentiable modeling operator that physically guides the cost function minimization, making the proposed method solely dependent on observed data. The optimization problem therefore reduces to updating the network weights to minimize the data misfit. We test the proposed method with field and synthetic data at different acquisition frequencies, demonstrating that the resulting resistivity models are more accurate than those obtained with state-of-the-art techniques.
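The abstract does not include implementation details, so the following is only a minimal sketch of the idea it describes: a differentiable 1D MT forward operator (here, the standard Wait recursion for a layered half-space) written with TensorFlow operations, and an unsupervised loop that updates network weights to minimize the data misfit through that operator. The network architecture, fixed layer thicknesses, log-apparent-resistivity loss, and optimizer settings are illustrative assumptions, not the authors' code.

```python
import numpy as np
import tensorflow as tf

MU0 = 4e-7 * np.pi  # magnetic permeability of free space [H/m]

def mt1d_forward(resistivities, thicknesses, frequencies):
    """Apparent resistivity and phase of a 1D layered earth (Wait recursion).

    resistivities: (n_layers,) ohm-m, last entry is the terminating half-space
    thicknesses:   (n_layers - 1,) metres
    frequencies:   (n_freq,) Hz
    """
    rho = tf.cast(resistivities, tf.complex128)
    h = tf.cast(thicknesses, tf.complex128)
    iwm = tf.cast(frequencies, tf.complex128) * (2j * np.pi * MU0)  # i*omega*mu0

    # Impedance of the bottom half-space, then recurse upward layer by layer.
    Z = tf.sqrt(iwm * rho[-1])
    for j in range(int(resistivities.shape[0]) - 2, -1, -1):
        k = tf.sqrt(iwm / rho[j])          # propagation constant of layer j
        Z0 = iwm / k                       # intrinsic impedance of layer j
        t = tf.tanh(k * h[j])
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)

    omega = tf.cast(2.0 * np.pi * frequencies, tf.float64)
    rho_app = tf.abs(Z) ** 2 / (omega * MU0)     # apparent resistivity [ohm-m]
    phase = tf.math.angle(Z) * 180.0 / np.pi     # impedance phase [deg]
    return rho_app, phase

def unsupervised_invert(obs_rho_app, frequencies, thicknesses,
                        steps=2000, lr=1e-2):
    """Fit only the observed sounding: no labelled resistivity models are used."""
    n_layers = int(thicknesses.shape[0]) + 1
    net = tf.keras.Sequential([              # small illustrative network
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_layers, activation="softplus"),  # keep rho > 0
    ])
    opt = tf.keras.optimizers.Adam(lr)
    x = tf.math.log(tf.reshape(tf.cast(obs_rho_app, tf.float32), (1, -1)))
    log_obs = tf.math.log(tf.cast(obs_rho_app, tf.float64))

    for _ in range(steps):
        with tf.GradientTape() as tape:
            rho_pred = tf.squeeze(net(x))                        # (n_layers,)
            rho_app, _ = mt1d_forward(rho_pred, thicknesses, frequencies)
            # Data misfit in log space; the forward operator supplies the physics.
            loss = tf.reduce_mean((tf.math.log(rho_app) - log_obs) ** 2)
        grads = tape.gradient(loss, net.trainable_variables)
        opt.apply_gradients(zip(grads, net.trainable_variables))
    return tf.squeeze(net(x)).numpy()                            # recovered model
```

A complete implementation would typically also fit the impedance phase, handle variable layer geometries, and add regularization; those choices are not specified in the abstract and are omitted from this sketch.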
Related papers
- Physics-Driven Self-Supervised Deep Learning for Free-Surface Multiple Elimination [3.3244277562036095]
In geophysics, deep learning (DL) methods are commonly based on supervised learning from large amounts of high-quality labelled data.
We propose a method in which the DL model learns to effectively parameterize the free-surface multiple-free wavefield from the full wavefield by incorporating the underlying physics into the loss computation.
This, in turn, yields high-quality estimates without ever being shown any ground truth data.
arXiv Detail & Related papers (2025-01-26T15:37:23Z) - chemtrain: Learning Deep Potential Models via Automatic Differentiation and Statistical Physics [0.0]
Neural Networks (NNs) are promising models for refining the accuracy of molecular dynamics.
Chemtrain is a framework to learn sophisticated NN potential models through customizable training routines and advanced training algorithms.
arXiv Detail & Related papers (2024-08-28T15:14:58Z) - Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for labeled training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z) - Predictive Maintenance Model Based on Anomaly Detection in Induction Motors: A Machine Learning Approach Using Real-Time IoT Data [0.0]
In this work, we demonstrate a novel anomaly detection system on induction motors used in pumps, compressors, fans, and other industrial machines.
We use a combination of pre-processing techniques and machine learning (ML) models with a low computational cost.
arXiv Detail & Related papers (2023-10-15T18:43:45Z) - Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z) - Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z) - A machine learning approach to the prediction of heat-transfer coefficients in micro-channels [4.724825031148412]
The accurate prediction of the two-phase heat transfer coefficient (HTC) is key to the optimal design and operation of compact heat exchangers.
We use a multi-output Gaussian process regression (GPR) to estimate the HTC in microchannels as a function of the mass flow rate, heat flux, system pressure and channel diameter and length.
arXiv Detail & Related papers (2023-05-28T15:48:01Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating an alternating direction method of multipliers (ADMM)-based data-imputation scheme.
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
arXiv Detail & Related papers (2021-05-31T08:15:44Z) - Low-Rank Hankel Tensor Completion for Traffic Speed Estimation [7.346671461427793]
We propose a purely data-driven and model-free solution to the traffic state estimation problem.
By imposing a low-rank assumption on this tensor structure, we can approximately characterize both global patterns and the unknown, complex local dynamics.
We conduct numerical experiments on both synthetic simulation data and real-world high-resolution data, and our results demonstrate the effectiveness and superiority of the proposed model.
arXiv Detail & Related papers (2021-05-21T00:08:06Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)