HFNO: an interpretable data-driven decomposition strategy for turbulent flows
- URL: http://arxiv.org/abs/2511.01535v1
- Date: Mon, 03 Nov 2025 12:57:19 GMT
- Title: HFNO: an interpretable data-driven decomposition strategy for turbulent flows
- Authors: Marco Cayuela, Vincent Le Chenadec, Peter Schmid, Taraneh Sayadi,
- Abstract summary: We present a novel FNO-based architecture tailored for reduced-order modeling of turbulent fluid flows. The proposed architecture processes wavenumber bins in parallel, enabling approximation of dispersion relations and non-linear interactions. We evaluate the proposed model on a series of increasingly complex dynamical systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fourier Neural Operators (FNOs) have demonstrated exceptional accuracy in mapping functional spaces by leveraging Fourier transforms to establish a connection with underlying physical principles. However, their opaque inner workings often constitute an obstacle to physical interpretability. This work introduces Hierarchical Fourier Neural Operators (HFNOs), a novel FNO-based architecture tailored for reduced-order modeling of turbulent fluid flows, designed to enhance interpretability by explicitly separating fluid behavior across scales. The proposed architecture processes wavenumber bins in parallel, enabling the approximation of dispersion relations and non-linear interactions. Inputs are lifted to a higher-dimensional space, Fourier-transformed, and partitioned into wavenumber bins. Each bin is processed by a Fully Connected Neural Network (FCNN), with outputs subsequently padded, summed, and inverse-transformed back into physical space. A final transformation refines the output in physical space as a correction model, by means of either a Convolutional Neural Network (CNN) or an Echo State Network (ESN). We evaluate the proposed model on a series of increasingly complex dynamical systems: first on the one-dimensional Kuramoto-Sivashinsky equation, then on the two-dimensional Kolmogorov flow, and finally on the prediction of wall shear stress in turbulent channel flow, given the near-wall velocity field. In all test cases, the model demonstrates its ability to decompose turbulent flows across various scales, opening up the possibility of increased interpretability and multiscale modeling of such flows.
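The pipeline described in the abstract (lift, Fourier transform, wavenumber binning, per-bin dense processing, padded summation, inverse transform, physical-space correction) can be sketched as follows. This is a minimal illustrative reconstruction from the abstract alone, not the authors' implementation: the weights are random placeholders, the per-bin FCNN is reduced to a single dense map, and a 3-tap convolution stands in for the CNN/ESN correction model.

```python
import numpy as np

rng = np.random.default_rng(0)

def hfno_forward(u, n_bins=4, width=8):
    """Hedged sketch of an HFNO-style forward pass on a 1-D field u.

    Steps mirror the abstract: lift -> FFT -> wavenumber bins ->
    per-bin dense map -> pad/sum -> inverse FFT -> conv correction.
    All weights are untrained random placeholders.
    """
    n = u.shape[0]
    # 1) Lift the scalar field to a higher-dimensional channel space.
    W_lift = rng.standard_normal((1, width)) / np.sqrt(width)
    v = u[:, None] @ W_lift                      # (n, width)

    # 2) Fourier transform along the spatial axis.
    v_hat = np.fft.rfft(v, axis=0)               # (n//2+1, width), complex

    n_modes = v_hat.shape[0]
    edges = np.linspace(0, n_modes, n_bins + 1, dtype=int)

    # 3) Process each wavenumber bin independently (stand-in for the
    #    per-bin FCNN), then pad each result back to full spectral size
    #    and sum the contributions.
    out_hat = np.zeros_like(v_hat)
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        R = rng.standard_normal((width, width)) / width  # per-bin weights
        out_hat[lo:hi] += v_hat[lo:hi] @ R

    # 4) Inverse transform to physical space; project channels to a scalar.
    w = np.fft.irfft(out_hat, n=n, axis=0)       # (n, width)
    W_proj = rng.standard_normal((width, 1)) / np.sqrt(width)
    y = (w @ W_proj)[:, 0]                       # (n,)

    # 5) Physical-space correction: a simple periodic 3-tap convolution
    #    stands in for the CNN or ESN correction model.
    kernel = np.array([0.25, 0.5, 0.25])
    y = np.convolve(np.pad(y, 1, mode="wrap"), kernel, mode="valid")
    return y

u = np.sin(2 * np.pi * np.arange(64) / 64)
y = hfno_forward(u)
print(y.shape)  # (64,)
```

The key interpretability idea is step 3: because each wavenumber bin is handled by its own network, the contribution of each scale band to the output can be inspected in isolation, which a monolithic FNO does not permit.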
Related papers
- Parallel Complex Diffusion for Scalable Time Series Generation [50.01609741902786]
PaCoDi is a spectral-native architecture that decouples generative modeling in the frequency domain. We show that PaCoDi outperforms existing baselines in both generation quality and inference speed.
arXiv Detail & Related papers (2026-02-10T14:31:53Z) - Physics-Informed Design of Input Convex Neural Networks for Consistency Optimal Transport Flow Matching [1.3709465727733763]
A physics-informed input convex neural network (PICNN) plays a central role in constructing the flow field that emulates the displacement. During the prediction stage, our approach supports both one-step (Brenier-map) and multi-step ODE sampling from the same learned potential, leveraging the straightness of the OT flow.
arXiv Detail & Related papers (2025-11-08T15:30:55Z) - Floating-Body Hydrodynamic Neural Networks [8.501171043928354]
We propose a physics-structured framework that predicts interpretable parameters such as directional added masses, drag coefficients, and a streamfunction-based flow, and couples them with analytic equations of motion. Compared with Hamiltonian and Lagrangian neural networks, FHNN more effectively handles dissipative dynamics while preserving interpretability, which bridges the gap between black-box learning and transparent system identification.
arXiv Detail & Related papers (2025-09-17T07:51:35Z) - Equivariant U-Shaped Neural Operators for the Cahn-Hilliard Phase-Field Model [4.79907962230318]
We show that an equivariant U-shaped neural operator (E-UNO) can learn the evolution of the phase-field variable from short histories of past dynamics. By encoding symmetry and scale hierarchy, the model generalizes better, requires less training data, and yields physically consistent dynamics.
arXiv Detail & Related papers (2025-09-01T09:25:31Z) - FLEX: A Backbone for Diffusion-Based Modeling of Spatio-temporal Physical Systems [51.15230303652732]
FLEX (FLow EXpert) is a backbone architecture for generative modeling of spatio-temporal physical systems. It reduces the variance of the velocity field in the diffusion model, which helps stabilize training. It achieves accurate predictions for super-resolution and forecasting tasks using as few as two reverse diffusion steps.
arXiv Detail & Related papers (2025-05-23T00:07:59Z) - Fourier-Invertible Neural Encoder (FINE) for Homogeneous Flows [4.095418032380801]
Invertible neural networks have attracted attention for their compactness, interpretability, and information-preserving properties. We propose the Fourier-Invertible Neural Encoder (FINE), which combines invertible monotonic activation functions with reversible filter structures.
arXiv Detail & Related papers (2025-05-21T10:02:59Z) - FlowDPS: Flow-Driven Posterior Sampling for Inverse Problems [51.99765487172328]
Posterior sampling for inverse problem solving can be effectively achieved using flows. Flow-Driven Posterior Sampling (FlowDPS) outperforms state-of-the-art alternatives.
arXiv Detail & Related papers (2025-03-11T07:56:14Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Physics-embedded Fourier Neural Network for Partial Differential Equations [35.41134465442465]
We introduce Physics-embedded Fourier Neural Networks (PeFNN) with flexible and explainable error.
PeFNN is designed to enforce momentum conservation and yields interpretable nonlinear expressions.
We demonstrate its outstanding performance for challenging real-world applications such as large-scale flood simulations.
arXiv Detail & Related papers (2024-07-15T18:30:39Z) - Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - Moser Flow: Divergence-based Generative Modeling on Manifolds [49.04974733536027]
Moser Flow (MF) is a new class of generative models within the family of continuous normalizing flows (CNFs).
MF does not require invoking or backpropagating through an ODE solver during training.
We demonstrate for the first time the use of flow models for sampling from general curved surfaces.
arXiv Detail & Related papers (2021-08-18T09:00:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.