Fourier-DeepONet: Fourier-enhanced deep operator networks for full
waveform inversion with improved accuracy, generalizability, and robustness
- URL: http://arxiv.org/abs/2305.17289v2
- Date: Mon, 24 Jul 2023 18:10:47 GMT
- Title: Fourier-DeepONet: Fourier-enhanced deep operator networks for full
waveform inversion with improved accuracy, generalizability, and robustness
- Authors: Min Zhu, Shihang Feng, Youzuo Lin, Lu Lu
- Abstract summary: Full waveform inversion (FWI) infers the subsurface structure information from seismic waveform data by solving a non-convex optimization problem.
Here, we develop a Fourier-enhanced deep operator network (Fourier-DeepONet) for FWI that generalizes across seismic sources, including the frequencies and locations of sources.
Our experiments demonstrate that Fourier-DeepONet obtains more accurate predictions of subsurface structures across a wide range of source parameters.
- Score: 4.186792090302649
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Full waveform inversion (FWI) infers the subsurface structure information
from seismic waveform data by solving a non-convex optimization problem.
Data-driven FWI has been increasingly studied with various neural network
architectures to improve accuracy and computational efficiency. Nevertheless,
the applicability of pre-trained neural networks is severely restricted by
potential discrepancies between the source function used in the field survey
and the one utilized during training. Here, we develop a Fourier-enhanced deep
operator network (Fourier-DeepONet) for FWI with the generalization of seismic
sources, including the frequencies and locations of sources. Specifically, we
employ the Fourier neural operator as the decoder of DeepONet, and we utilize
source parameters as one input of Fourier-DeepONet, facilitating the resolution
of FWI with variable sources. To test Fourier-DeepONet, we develop three new
and realistic FWI benchmark datasets (FWI-F, FWI-L, and FWI-FL) with varying
source frequencies, locations, or both. Our experiments demonstrate that
compared with existing data-driven FWI methods, Fourier-DeepONet obtains more
accurate predictions of subsurface structures in a wide range of source
parameters. Moreover, the proposed Fourier-DeepONet exhibits superior
robustness when handling data with Gaussian noise or missing traces and sources
with Gaussian noise, paving the way for more reliable and accurate subsurface
imaging across diverse real conditions.
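To make the architecture concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a branch net encodes the seismic data, the source parameters enter as a second input, and the merged features are decoded by Fourier (spectral convolution) layers. The class names, layer sizes, the elementwise merge, and the simplified mode truncation are all illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch only: a DeepONet-style model with source parameters as a
# second input and a Fourier (spectral) decoder. Shapes and the merge rule are
# assumptions, not the published Fourier-DeepONet architecture.
import torch
import torch.nn as nn


class SpectralConv2d(nn.Module):
    """2-D spectral convolution: scale the lowest Fourier modes by learned weights.
    (Simplified: a full FNO layer also keeps negative-frequency modes.)"""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, channels, H, W)
        x_ft = torch.fft.rfft2(x)
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])


class FourierDeepONetSketch(nn.Module):
    """Branch net encodes the seismic data; a second input carries the source
    parameters (e.g., frequency and location); Fourier layers decode the merged
    features into a subsurface velocity map."""
    def __init__(self, channels=32, modes=12, n_src_params=2):
        super().__init__()
        self.branch = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.src_embed = nn.Linear(n_src_params, channels)  # source-parameter input
        self.fourier1 = SpectralConv2d(channels, modes)
        self.fourier2 = SpectralConv2d(channels, modes)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, seismic, src_params):
        # seismic: (batch, 1, H, W); src_params: (batch, n_src_params)
        feat = self.branch(seismic)
        gate = self.src_embed(src_params)[:, :, None, None]
        x = feat * gate                       # DeepONet-style elementwise merge
        x = torch.relu(self.fourier1(x) + x)  # Fourier layers with skip connections
        x = torch.relu(self.fourier2(x) + x)
        return self.head(x)


model = FourierDeepONetSketch()
out = model(torch.randn(4, 1, 64, 64), torch.randn(4, 2))
print(out.shape)  # torch.Size([4, 1, 64, 64])
```

The point of the sketch is the second input path: because the source frequency and location are fed to the network rather than fixed by the training data, a single trained model can in principle handle variable sources, which is the generalization the abstract claims.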
Related papers
- Inversion-DeepONet: A Novel DeepONet-Based Network with Encoder-Decoder for Full Waveform Inversion [28.406887976413845]
We propose Inversion-DeepONet, a novel deep operator network (DeepONet) architecture for full waveform inversion (FWI).
We utilize a convolutional neural network (CNN) to extract features from the seismic data in the branch net.
We confirm the superior performance on accuracy and generalization ability of our network, compared with existing data-driven FWI methods.
arXiv Detail & Related papers (2024-08-15T08:15:06Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- A Physics-Guided Bi-Fidelity Fourier-Featured Operator Learning Framework for Predicting Time Evolution of Drag and Lift Coefficients [4.584598411021565]
This paper proposes a deep operator learning-based framework that requires only a limited high-fidelity dataset for training.
We introduce a novel physics-guided, bi-fidelity, Fourier-featured Deep Operator Network (DeepONet) framework that effectively combines low and high-fidelity datasets.
We validate our approach using a well-known 2D benchmark cylinder problem, which aims to predict the time trajectories of lift and drag coefficients.
arXiv Detail & Related papers (2023-11-07T00:56:54Z)
- WFTNet: Exploiting Global and Local Periodicity in Long-term Time Series Forecasting [61.64303388738395]
We propose a Wavelet-Fourier Transform Network (WFTNet) for long-term time series forecasting.
Tests on various time series datasets show WFTNet consistently outperforms other state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-20T13:44:18Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning that embeds inputs into a learned Fourier basis (see the sketch after this list).
We show that it improves the sample efficiency of both state-based and image-based RL.
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
- On the Robustness and Generalization of Deep Learning Driven Full Waveform Inversion [2.5382095320488665]
Full Waveform Inversion (FWI) is commonly framed as an image-to-image translation task.
Despite being trained with synthetic data, the deep learning-driven FWI is expected to perform well when evaluated with sufficient real-world data.
We study such properties by asking: how robust are these deep neural networks and how do they generalize?
arXiv Detail & Related papers (2021-11-28T19:27:59Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Conditioning Trick for Training Stable GANs [70.15099665710336]
We propose a conditioning trick, called difference departure from normality, applied on the generator network in response to instability issues during GAN training.
We force the generator to get closer to the departure from normality function of real samples computed in the spectral domain of Schur decomposition.
arXiv Detail & Related papers (2020-10-12T16:50:22Z)
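As a footnote to the functional-regularization entry above, here is a minimal sketch of a learned Fourier feature embedding, the mechanism that summary refers to. The module name, the sin/cos concatenation, and all sizes are illustrative assumptions rather than that paper's exact design.

```python
# Illustrative sketch: embed inputs x as [sin(2*pi*xB), cos(2*pi*xB)] with a
# learned frequency matrix B, then feed the embedding to a downstream MLP.
import torch
import torch.nn as nn


class LearnedFourierFeatures(nn.Module):
    """Learned Fourier basis embedding; B is trained jointly with the network."""
    def __init__(self, in_dim, n_features, scale=1.0):
        super().__init__()
        self.B = nn.Parameter(scale * torch.randn(in_dim, n_features))

    def forward(self, x):
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)


# Example: embed an 8-dimensional state before a small policy network.
embed = LearnedFourierFeatures(in_dim=8, n_features=64)
policy = nn.Sequential(embed, nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 4))
print(policy(torch.randn(32, 8)).shape)  # torch.Size([32, 4])
```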
This list is automatically generated from the titles and abstracts of the papers on this site.