Physics-informed waveform inversion using pretrained wavefield neural operators
- URL: http://arxiv.org/abs/2509.08967v2
- Date: Wed, 22 Oct 2025 04:33:19 GMT
- Title: Physics-informed waveform inversion using pretrained wavefield neural operators
- Authors: Xinquan Huang, Fu Wang, Tariq Alkhalifah
- Abstract summary: Full waveform inversion (FWI) is crucial for reconstructing high-resolution subsurface models. Recent attempts to accelerate FWI using learned wavefield neural operators have shown promise in efficiency and differentiability. We introduce a novel physics-informed FWI framework to enhance inversion accuracy while maintaining the efficiency of neural operator-based FWI.
- Score: 9.048550821334116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Full waveform inversion (FWI) is crucial for reconstructing high-resolution subsurface models, but, given limited data, it is often hindered by its null space, which yields low-resolution models, and, more importantly, by its computational cost, especially when needed for real-time applications. Recent attempts to accelerate FWI using learned wavefield neural operators have shown promise in efficiency and differentiability, but typically suffer from noisy and unstable inversion performance. To address these limitations, we introduce a novel physics-informed FWI framework that enhances inversion accuracy while maintaining the efficiency of neural operator-based FWI. Instead of relying only on an L2-norm objective function via automatic differentiation, which results in noisy model reconstructions, we integrate a physics-constraint term into the FWI loss function, improving the quality of the inverted velocity models. Specifically, starting from an initial model, we simulate wavefields and evaluate a loss over both how well the resulting wavefield obeys the physical laws (the wave equation) and how well it matches the recorded data, achieving a reduction in noise and artifacts. Numerical experiments using the OpenFWI and Overthrust models demonstrate our method's superior performance, offering cleaner and more accurate subsurface velocity models than vanilla approaches. Considering the efficiency of the approach compared to FWI, this advancement represents a significant step forward in the practical application of FWI for real-time subsurface monitoring.
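The combined objective described in the abstract, a data-misfit term plus a wave-equation residual term, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes a frequency-domain (Helmholtz) formulation on a uniform grid, a five-point finite-difference Laplacian, and a hypothetical weight `lam` balancing the physics term.

```python
import numpy as np

def laplacian_2d(u, h):
    """Five-point finite-difference Laplacian on a uniform grid (spacing h)."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    ) / h**2
    return lap

def physics_informed_fwi_loss(u, v, d_obs, rec_idx, omega, h, lam=1.0):
    """Illustrative combined objective: data misfit + wave-equation residual.

    u       : predicted frequency-domain wavefield on the grid
    v       : current velocity model (scalar or array)
    d_obs   : observed data at receiver locations
    rec_idx : tuple of index arrays selecting receiver positions in u
    omega   : angular frequency
    lam     : assumed weight of the physics-constraint term
    """
    # Data-misfit term: L2 difference at the receiver locations.
    data_term = np.sum((u[rec_idx] - d_obs) ** 2)
    # Physics term: residual of the Helmholtz equation
    # (omega^2 / v^2) u + laplacian(u) = 0, evaluated on interior points.
    residual = (omega**2 / v**2) * u + laplacian_2d(u, h)
    physics_term = np.sum(residual[1:-1, 1:-1] ** 2)
    return data_term + lam * physics_term
```

For an exact Helmholtz solution that also matches the observed data, both terms vanish up to finite-difference error, while a wavefield that violates the wave equation is penalized even where no receivers record it.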
Related papers
- Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models [57.49136894315871]
The new paradigm of test-time scaling has yielded remarkable breakthroughs in reasoning models and generative vision models. We propose a solution to the problem of integrating test-time scaling knowledge into a model during post-training. We replace reward-guided test-time noise optimization in diffusion models with a Noise Hypernetwork that modulates the initial input noise.
arXiv Detail & Related papers (2025-08-13T17:33:37Z) - An effective physics-informed neural operator framework for predicting wavefields [10.94738894332709]
We introduce a physics-informed convolutional neural operator (PICNO) to solve the Helmholtz equation efficiently. PICNO takes the background wavefield corresponding to a homogeneous medium and the velocity model as the input function space, generating the scattered wavefield as the output function space. It allows for reasonably accurate high-resolution predictions even with limited training samples.
arXiv Detail & Related papers (2025-07-22T10:22:30Z) - Diffusion prior as a direct regularization term for FWI [0.0]
We incorporate a score-based generative diffusion prior into full waveform inversion (FWI) as a direct regularization term. Unlike traditional diffusion approaches, our method avoids reverse diffusion sampling and needs fewer iterations. The proposed method offers enhanced fidelity and robustness compared to conventional and GAN-based FWI approaches.
arXiv Detail & Related papers (2025-06-11T19:43:23Z) - Full waveform inversion with CNN-based velocity representation extension [4.255346660147713]
Full waveform inversion (FWI) updates the velocity model by minimizing the discrepancy between observed and simulated data. Discretization errors in numerical modeling and incomplete seismic data acquisition can introduce noise, which propagates through the adjoint operator. We employ a convolutional neural network (CNN) to refine the velocity model before performing the forward simulation. We use the same data-misfit loss to update both the velocity and the network parameters, thereby forming a self-supervised learning procedure.
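The reparameterization idea in this entry, refining the velocity with a network and updating both the velocity and the network weights from one data-misfit loss, can be sketched as a toy NumPy example. This is not the paper's implementation: the wave-equation solver is replaced by a random matrix `A`, the "CNN" is reduced to a single learnable 3-tap convolution kernel, gradients are taken numerically, and the step sizes are illustrative assumptions.

```python
import numpy as np

def conv1d_same(x, w):
    """'Same'-padded 1-D convolution with edge padding."""
    pad = len(w) // 2
    return np.convolve(np.pad(x, pad, mode="edge"), w, mode="valid")

def numerical_grad(f, x, eps=1e-6):
    """Central-difference gradient of scalar f at x (illustration only)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp.flat[i] += eps
        xm.flat[i] -= eps
        g.flat[i] = (f(xp) - f(xm)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
n = 16
A = rng.normal(size=(8, n))        # stand-in for wave-equation forward modelling
v_true = 2.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))
d_obs = A @ v_true                 # "observed" data

v = np.full(n, 2.0)                # initial velocity model
w = np.zeros(3)                    # "network" parameters: one 3-tap kernel

def refine(v, w):
    # The network refines the velocity before forward simulation.
    return v + conv1d_same(v, w)

def misfit(v, w):
    # The same data-misfit loss drives both velocity and network updates.
    return np.sum((A @ refine(v, w) - d_obs) ** 2)

loss0 = misfit(v, w)
lr_v, lr_w = 5e-3, 5e-5            # separate step sizes (toy choice)
for _ in range(500):
    v = v - lr_v * numerical_grad(lambda vv: misfit(vv, w), v)
    w = w - lr_w * numerical_grad(lambda ww: misfit(v, ww), w)
```

Because no labels are involved, only the observed data, the joint update of `v` and `w` is self-supervised in the sense described above; in the actual paper the forward operator is a wave-equation simulation and the refiner is a full CNN trained by automatic differentiation.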
arXiv Detail & Related papers (2025-04-22T12:14:38Z) - A new practical and effective source-independent full-waveform inversion with a velocity-distribution supported deep image prior: Applications to two real datasets [6.802692977157491]
Full-waveform inversion (FWI) is an advanced technique for reconstructing high-resolution subsurface physical parameters. We introduce a correlation-based, source-independent objective function for FWI that aims to mitigate source uncertainty and amplitude dependency. We demonstrate the superiority of our proposed method using synthetic data from benchmark velocity models and two real datasets.
arXiv Detail & Related papers (2025-03-01T23:15:43Z) - DispFormer: A Pretrained Transformer Incorporating Physical Constraints for Dispersion Curve Inversion [56.64622091009756]
This study introduces DispFormer, a transformer-based neural network for $v_s$ profile inversion from Rayleigh-wave phase and group dispersion curves. DispFormer processes dispersion data independently at each period, allowing it to handle varying lengths without requiring network modifications or strict alignment between training and testing datasets.
arXiv Detail & Related papers (2025-01-08T09:08:24Z) - KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior reconstruction quality within comparable computational time, and achieves state-of-the-art view synthesis performance with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data [37.69303106863453]
We propose a novel approach for phase-resolved wave surface reconstruction using neural networks.
Our approach utilizes synthetic yet highly realistic training data on uniform one-dimensional grids.
arXiv Detail & Related papers (2023-05-18T12:30:26Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - Solving Seismic Wave Equations on Variable Velocity Models with Fourier Neural Operator [3.2307366446033945]
We propose a new framework, the paralleled Fourier neural operator (PFNO), for efficiently training the FNO-based solver.
Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models.
PFNO admits higher computational efficiency on large-scale testing datasets, compared with the traditional finite-difference method.
arXiv Detail & Related papers (2022-09-25T22:25:57Z) - Fourier Space Losses for Efficient Perceptual Image Super-Resolution [131.50099891772598]
We show that it is possible to improve the performance of a recently introduced efficient generator architecture solely with the application of our proposed loss functions.
We show that our losses' direct emphasis on frequencies in Fourier space significantly boosts perceptual image quality.
The trained generator achieves results comparable to, and is 2.4x and 48x faster than, the state-of-the-art perceptual SR methods RankSRGAN and SRFlow, respectively.
arXiv Detail & Related papers (2021-06-01T20:34:52Z) - Real Time Speech Enhancement in the Waveform Domain [99.02180506016721]
We present a causal speech enhancement model working on the raw waveform that runs in real-time on a laptop CPU.
The proposed model is based on an encoder-decoder architecture with skip-connections.
It is capable of removing various kinds of background noise including stationary and non-stationary noises.
arXiv Detail & Related papers (2020-06-23T09:19:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.