Efficient temporal prediction of compressible flows in irregular domains using Fourier neural operators
- URL: http://arxiv.org/abs/2601.01922v1
- Date: Mon, 05 Jan 2026 09:12:35 GMT
- Title: Efficient temporal prediction of compressible flows in irregular domains using Fourier neural operators
- Authors: Yifan Nie, Qiaoxin Li,
- Abstract summary: This paper investigates the temporal evolution of high-speed compressible fluids in irregular flow fields using the Fourier Neural Operator (FNO). We reconstruct the irregular flow field point set into a sequential format compatible with FNO input requirements, and then embed a temporal bundling technique within a recurrent neural network (RNN) for multi-step prediction.
- Score: 0.3819617852128932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the temporal evolution of high-speed compressible fluids in irregular flow fields using the Fourier Neural Operator (FNO). We reconstruct the irregular flow field point set into a sequential format compatible with FNO input requirements, and then embed a temporal bundling technique within a recurrent neural network (RNN) for multi-step prediction. We further employ a composite loss function to balance errors across different physical quantities. Experiments are conducted on three different types of irregular flow fields, including orthogonal and non-orthogonal grid configurations. We then comprehensively analyze the physical component loss curves, flow field visualizations, and physical profiles. Results demonstrate that our approach significantly surpasses traditional numerical methods in computational efficiency while achieving high accuracy, with maximum relative $L_2$ errors of (0.78, 0.57, 0.35)% for ($p$, $T$, $\mathbf{u}$) respectively. This verifies that the method can efficiently and accurately simulate the temporal evolution of high-speed compressible flows in irregular domains.
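Two ingredients of the abstract — the relative $L_2$ error metric and multi-step prediction with temporal bundling — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `model` interface, bundle size, and array shapes are assumptions.

```python
import numpy as np

def relative_l2_error(pred, ref):
    # Relative L2 error ||pred - ref||_2 / ||ref||_2, as reported
    # per physical quantity (p, T, u) in the abstract.
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

def rollout(model, state, n_steps, bundle=4):
    # Temporal bundling: the model emits `bundle` future steps per call
    # (hypothetical interface); the last predicted step is fed back as
    # the next input, giving a recurrent multi-step rollout.
    preds = []
    for _ in range(n_steps // bundle):
        chunk = model(state)      # assumed shape: (bundle, *state.shape)
        preds.append(chunk)
        state = chunk[-1]         # feed last step back in
    return np.concatenate(preds, axis=0)
```

Bundling trades rollout length for fewer autoregressive calls, which reduces the accumulation of feedback error compared with strict one-step recurrence.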
Related papers
- Physics-informed neural particle flow for the Bayesian update step [0.8220217498103312]
We propose a physics-informed neural particle flow, which is an amortized inference framework. By embedding a governing partial differential equation (PDE) into the loss function, we train a neural network to approximate the transport velocity field. We demonstrate that the neural parameterization acts as an implicit regularizer, mitigating the stiffness inherent to analytic flows.
arXiv Detail & Related papers (2026-02-26T15:10:45Z) - Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity for flow-matching based generative models. We decompose the velocity field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field.
arXiv Detail & Related papers (2025-12-01T05:14:25Z) - MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet). In particular, we design a convolutional filter based on the structure of finite difference with a small number of parameters to optimize. A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
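The 4th-order Runge-Kutta integrator named in the Physics Block is the classical RK4 scheme. A minimal single-step sketch, assuming an autonomous right-hand side `f` (this is the textbook scheme, not the MultiPDENet code):

```python
import numpy as np

def rk4_step(f, u, dt):
    # One classical 4th-order Runge-Kutta step for du/dt = f(u):
    # four stage evaluations combined with weights 1, 2, 2, 1.
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

For du/dt = u with u(0) = 1 and dt = 0.1, one step reproduces exp(0.1) to within about 1e-7, the expected O(dt^5) local error.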
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Implicit factorized transformer approach to fast prediction of turbulent channel flows [6.70175842351963]
We introduce a modified implicit factorized transformer (IFactFormer-m) model which replaces the original chained factorized attention with parallel factorized attention. The IFactFormer-m model successfully performs long-term predictions for turbulent channel flow.
arXiv Detail & Related papers (2024-12-25T09:05:14Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Forecasting subcritical cylinder wakes with Fourier Neural Operators [58.68996255635669]
We apply a state-of-the-art operator learning technique to forecast the temporal evolution of experimentally measured velocity fields.
We find that FNOs are capable of accurately predicting the evolution of experimental velocity fields throughout the range of Reynolds numbers tested.
arXiv Detail & Related papers (2023-01-19T20:04:36Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1)
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points, and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - Finite volume method network for acceleration of unsteady computational fluid dynamics: non-reacting and reacting flows [0.0]
A neural network model with a unique network architecture and physics-informed loss function was developed to accelerate CFD simulations.
On the reacting flow dataset, the computational speed of this network model was measured to be about 10 times faster than that of the CFD solver.
arXiv Detail & Related papers (2021-05-07T15:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.