Feature-Modulated UFNO for Improved Prediction of Multiphase Flow in Porous Media
- URL: http://arxiv.org/abs/2511.20543v1
- Date: Tue, 25 Nov 2025 17:44:28 GMT
- Title: Feature-Modulated UFNO for Improved Prediction of Multiphase Flow in Porous Media
- Authors: Alhasan Abdellatif, Hannah P. Menke, Ahmed H. Elsheikh, Florian Doster, Kamaljit Singh
- Abstract summary: We introduce UFNO-FiLM, an enhanced architecture that incorporates two key innovations. First, we decouple scalar inputs from spatial features using a Feature-wise Linear Modulation layer. Second, we employ a spatially weighted loss function that prioritizes learning in critical regions.
- Score: 0.39146761527401425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The UNet-enhanced Fourier Neural Operator (UFNO) extends the Fourier Neural Operator (FNO) by incorporating a parallel UNet pathway, enabling the retention of both high- and low-frequency components. While UFNO improves predictive accuracy over FNO, it inefficiently treats scalar inputs (e.g., temperature, injection rate) as spatially distributed fields by duplicating their values across the domain. This forces the model to process redundant constant signals within the frequency domain. Additionally, its standard loss function does not account for spatial variations in error sensitivity, limiting performance in regions of high physical importance. We introduce UFNO-FiLM, an enhanced architecture that incorporates two key innovations. First, we decouple scalar inputs from spatial features using a Feature-wise Linear Modulation (FiLM) layer, allowing the model to modulate spatial feature maps without introducing constant signals into the Fourier transform. Second, we employ a spatially weighted loss function that prioritizes learning in critical regions. Our experiments on subsurface multiphase flow demonstrate a 21% reduction in gas saturation Mean Absolute Error (MAE) compared to UFNO, highlighting the effectiveness of our approach in improving predictive accuracy.
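The two ideas in the abstract can be illustrated in code. The sketch below is a minimal, hypothetical PyTorch implementation, not the authors' released architecture: the FiLM block maps scalar inputs (e.g., temperature, injection rate) to per-channel scale and shift parameters that modulate a spatial feature map, so constants never enter the Fourier transform as duplicated fields; the weighted MAE applies a spatial weight map that up-weights critical regions. The hidden width (64) and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise Linear Modulation (illustrative sketch).

    A small MLP maps scalar inputs to per-channel scale (gamma) and
    shift (beta), which modulate a spatial feature map. This replaces
    broadcasting scalars into constant spatial fields.
    """

    def __init__(self, num_scalars: int, num_channels: int) -> None:
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_scalars, 64),  # hidden width is an assumption
            nn.ReLU(),
            nn.Linear(64, 2 * num_channels),  # gamma and beta per channel
        )

    def forward(self, feats: torch.Tensor, scalars: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) spatial feature map; scalars: (B, S)
        gamma, beta = self.mlp(scalars).chunk(2, dim=-1)  # each (B, C)
        return gamma[:, :, None, None] * feats + beta[:, :, None, None]


def weighted_mae(pred: torch.Tensor, target: torch.Tensor,
                 weights: torch.Tensor) -> torch.Tensor:
    """Spatially weighted MAE: `weights` (H, W) emphasizes critical regions."""
    return (weights * (pred - target).abs()).mean()
```

In a UFNO-style network, the FiLM modulation would be applied to the feature maps between spectral layers, while the spatial weight map would typically be derived from physical importance (e.g., near the injection well or the saturation front).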
Related papers
- Pseudo-differential-enhanced physics-informed neural networks [0.0]
We present pseudo-differential-enhanced physics-informed neural networks (PINNs), which extend the enhancement into Fourier space. Our methods often achieve lower PINN-versus-numerical error with less training.
arXiv Detail & Related papers (2026-02-16T11:40:58Z) - Self-Supervised Learning via Flow-Guided Neural Operator on Time-Series Data [57.85958428020496]
Flow-Guided Neural Operator (FGNO) is a novel framework combining operator learning with flow matching for SSL training. FGNO learns mappings in functional spaces by using the Short-Time Fourier Transform to unify different time resolutions. Unlike prior generative SSL methods that use noisy inputs during inference, we propose using clean inputs for representation extraction while learning representations with noise.
arXiv Detail & Related papers (2026-02-12T18:54:57Z) - SAOT: An Enhanced Locality-Aware Spectral Transformer for Solving PDEs [8.678387342998613]
We investigate incorporating the spatial-frequency localization property of Wavelet transforms into the Transformer architecture. We propose a novel Wavelet Attention (WA) module with linear computational complexity to efficiently learn locality-aware features. We further develop the Spectral Attention Operator Transformer (SAOT), a hybrid spectral Transformer framework that integrates WA's localized focus with the global receptive field of Fourier-based Attention.
arXiv Detail & Related papers (2025-11-24T05:22:28Z) - Physics-informed waveform inversion using pretrained wavefield neural operators [9.048550821334116]
Full waveform inversion (FWI) is crucial for reconstructing high-resolution subsurface models. Recent attempts to accelerate FWI using learned wavefield neural operators have shown promise in efficiency and differentiability. We introduce a novel physics-informed FWI framework that improves inversion accuracy while maintaining the efficiency of neural operator-based FWI.
arXiv Detail & Related papers (2025-09-10T19:57:18Z) - Enhancing Fourier Neural Operators with Local Spatial Features [16.887523262913234]
We introduce a convolutional neural network (CNN)-based feature pre-extractor to capture Local Spatial Features (LSFs) directly from input data. Our findings show that this simple yet impactful modification enhances the representational capacity of FNOs.
arXiv Detail & Related papers (2025-03-22T15:11:56Z) - Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method demonstrates a 10% lower testing error, using 20% fewer frequency modes compared to the existing Fourier Neural Operator, while also achieving a 30% faster training.
arXiv Detail & Related papers (2022-11-28T09:57:15Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1)
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - FAMLP: A Frequency-Aware MLP-Like Architecture For Domain Generalization [73.41395947275473]
We propose a novel frequency-aware architecture, in which the domain-specific features are filtered out in the transformed frequency domain.
Experiments on three benchmarks demonstrate significant performance, outperforming the state-of-the-art methods by a margin of 3%, 4% and 9%, respectively.
arXiv Detail & Related papers (2022-03-24T07:26:29Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.