Multi-fidelity graph-based neural networks architectures to learn Navier-Stokes solutions on non-parametrized 2D domains
- URL: http://arxiv.org/abs/2601.02157v1
- Date: Mon, 05 Jan 2026 14:35:59 GMT
- Title: Multi-fidelity graph-based neural networks architectures to learn Navier-Stokes solutions on non-parametrized 2D domains
- Authors: Francesco Songia, Raoul Sallé de Chou, Hugues Talbot, Irene Vignon-Clementel
- Abstract summary: We propose a graph-based, multi-fidelity learning framework for the prediction of stationary Navier--Stokes solutions in non-parametrized two-dimensional geometries. The method is designed to guide the learning process through successive approximations, starting from reduced-order and full Stokes models.
- Score: 0.6554302736811937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a graph-based, multi-fidelity learning framework for the prediction of stationary Navier--Stokes solutions in non-parametrized two-dimensional geometries. The method is designed to guide the learning process through successive approximations, starting from reduced-order and full Stokes models, and progressively approaching the Navier--Stokes solution. To effectively capture both local and long-range dependencies in the velocity and pressure fields, we combine graph neural networks with Transformer and Mamba architectures. While Transformers achieve the highest accuracy, we show that Mamba can be successfully adapted to graph-structured data through an unsupervised node-ordering strategy. The Mamba approach significantly reduces computational cost while maintaining performance. Physical knowledge is embedded directly into the architecture through an encoding -- processing -- physics-informed decoding pipeline. Derivatives are computed through algebraic operators constructed via the Weighted Least Squares method. The flexibility of these operators allows us not only to make the output obey the governing equations, but also to constrain selected hidden features to satisfy mass conservation. We introduce additional physical biases through an enriched graph convolution with the same differential operators describing the PDEs. Overall, we successfully guide the learning process with physical knowledge and fluid-dynamics insights, leading to more regular and accurate predictions.
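As a rough illustration of the "algebraic operators constructed via the Weighted Least Squares method" mentioned in the abstract, the sketch below assembles gradient operators on a 2D node cloud with NumPy. The function name `wls_gradient_operators`, the inverse-distance weighting, the six-nearest-neighbour stencil, and the dense-matrix storage are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def wls_gradient_operators(coords, neighbors):
    """Assemble dense gradient operators Dx, Dy on a 2D node cloud.

    For each node i, a local linear model u(x) ~ u_i + (x - x_i) . g is fitted
    to the neighbouring values by weighted least squares; row i of Dx (resp. Dy)
    then maps nodal values u to an estimate of du/dx (resp. du/dy) at node i.
    coords: (N, 2) node positions; neighbors: list of neighbour-index arrays.
    """
    n = coords.shape[0]
    Dx, Dy = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        nbrs = np.asarray(neighbors[i])
        d = coords[nbrs] - coords[i]                   # (k, 2) offsets to neighbours
        w = 1.0 / (np.linalg.norm(d, axis=1) + 1e-12)  # inverse-distance weights (assumed)
        P = np.linalg.pinv(d * w[:, None])             # (2, k) weighted pseudo-inverse
        cx, cy = P[0] * w, P[1] * w                    # stencil coefficients per neighbour
        Dx[i, nbrs] += cx
        Dx[i, i] -= cx.sum()
        Dy[i, nbrs] += cy
        Dy[i, i] -= cy.sum()
    return Dx, Dy

# Sanity check: a linear field is differentiated exactly (up to round-off).
rng = np.random.default_rng(0)
coords = rng.random((200, 2))
neighbors = [np.argsort(np.linalg.norm(coords - coords[i], axis=1))[1:7]
             for i in range(len(coords))]
Dx, Dy = wls_gradient_operators(coords, neighbors)
u = 3.0 * coords[:, 0] - 2.0 * coords[:, 1]            # u = 3x - 2y
assert np.allclose(Dx @ u, 3.0) and np.allclose(Dy @ u, -2.0)
```

Applied to predicted velocity components u and v, such operators give pointwise derivative estimates, so a residual like `Dx @ u + Dy @ v` could be penalized or constrained to express the mass-conservation condition the abstract refers to.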
Related papers
- Graph Signal Generative Diffusion Models [74.75869068073577]
We introduce U-shaped encoder-decoder graph neural networks (U-GNNs) for graph signal generation using denoising diffusion processes.
The architecture learns node features at different resolutions with skip connections between the encoder and decoder paths.
We demonstrate the effectiveness of the diffusion model in probabilistic forecasting of stock prices.
arXiv Detail & Related papers (2025-09-21T21:57:27Z) - PICT -- A Differentiable, GPU-Accelerated Multi-Block PISO Solver for Simulation-Coupled Learning Tasks in Fluid Dynamics [62.93137406343609]
We present our fluid simulator PICT, a differentiable pressure-implicit solver coded in PyTorch with Graphics-processing-unit (GPU) support.
We first verify the accuracy of both the forward simulation and our derived gradients in various established benchmarks.
We show that the gradients provided by our solver can be used to learn complicated turbulence models in 2D and 3D.
arXiv Detail & Related papers (2025-05-22T17:55:10Z) - Online Proximal ADMM for Graph Learning from Streaming Smooth Signals [9.34612743192798]
We develop a novel algorithm for online graph learning using observation streams, assumed to be smooth on the latent graph.
Our modus operandi is to process graph signals sequentially and thus keep memory and computational costs in check.
The proposed approach also exhibits better tracking performance (in terms of suboptimality) when compared to state-of-the-art online graph learning baselines.
arXiv Detail & Related papers (2024-09-19T17:12:03Z) - WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z) - Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields [8.282495481952784]
We show how to bring model- and data-driven approaches together by combining the explicit PDE-based approaches with convolutional neural networks.
Our model outperforms both fully explicit and fully data-driven baselines in terms of reconstruction quality, robustness and amount of required training data.
arXiv Detail & Related papers (2024-05-23T14:14:27Z) - Differentiable DG with Neural Operator Source Term Correction [0.0]
We introduce an end-to-end differentiable framework for solving the compressible Navier-Stokes equations.
This integrated approach combines a differentiable discontinuous Galerkin solver with a neural network source term.
We demonstrate the performance of the proposed framework through two examples.
arXiv Detail & Related papers (2023-10-29T04:26:23Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - Physics-constrained Unsupervised Learning of Partial Differential Equations using Meshes [1.066048003460524]
Graph neural networks show promise in accurately representing irregularly meshed objects and learning their dynamics.
In this work, we represent meshes naturally as graphs, process these using Graph Networks, and formulate our physics-based loss to provide an unsupervised learning framework for partial differential equations (PDEs).
Our framework will enable the application of PDE solvers in interactive settings, such as model-based control of soft-body deformations.
arXiv Detail & Related papers (2022-03-30T19:22:56Z) - Improved architectures and training algorithms for deep operator networks [0.0]
Operator learning techniques have emerged as a powerful tool for learning maps between infinite-dimensional Banach spaces.
We analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory.
arXiv Detail & Related papers (2021-10-04T18:34:41Z) - Neural Weighted A*: Learning Graph Costs and Heuristics with Differentiable Anytime A* [12.117737635879037]
Recent works related to data-driven planning aim at learning either cost functions or planner functions, but not both.
We propose Neural Weighted A*, a differentiable anytime planner able to produce improved representations of planar maps as graph costs and planners.
We experimentally show the validity of our claims by testing Neural Weighted A* against several baselines, introducing a novel, tile-based navigation dataset.
arXiv Detail & Related papers (2021-05-04T13:17:30Z) - Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator (a minimal sketch follows this list).
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
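For the affine skip connection described in the last entry, a minimal sketch in NumPy might look as follows; the function name, the normalized-adjacency convolution, and the ReLU nonlinearity are assumptions for illustration, not that paper's exact formulation.

```python
import numpy as np

def affine_skip_graph_conv(X, A_hat, W_conv, W_skip, b_skip):
    """One graph-convolution layer augmented with an affine skip connection.

    X: (N, F_in) node features; A_hat: (N, N) normalized adjacency matrix.
    The output sums a neighbourhood-aggregation branch and a node-wise affine
    (fully connected) branch before a ReLU nonlinearity.
    """
    conv = A_hat @ X @ W_conv        # graph-convolution branch
    skip = X @ W_skip + b_skip       # affine skip branch (per-node dense layer)
    return np.maximum(conv + skip, 0.0)
```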
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.