Multi-resolution Physics-Aware Recurrent Convolutional Neural Network for Complex Flows
- URL: http://arxiv.org/abs/2512.06031v1
- Date: Thu, 04 Dec 2025 16:19:10 GMT
- Title: Multi-resolution Physics-Aware Recurrent Convolutional Neural Network for Complex Flows
- Authors: Xinlun Cheng, Joseph Choi, H. S. Udaykumar, Stephen Baek
- Abstract summary: MRPARCv2 is designed to model complex flows by embedding the structure of advection-diffusion-reaction equations. We evaluate the model on a challenging 2D turbulent radiative layer dataset from The Well multi-physics benchmark repository.
- Score: 2.7233737247962786
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present MRPARCv2, a Multi-resolution Physics-Aware Recurrent Convolutional Neural Network designed to model complex flows by embedding the structure of advection-diffusion-reaction equations and leveraging a multi-resolution architecture. MRPARCv2 introduces hierarchical discretization and cross-resolution feature communication to improve the accuracy and efficiency of flow simulations. We evaluate the model on a challenging 2D turbulent radiative layer dataset from The Well multi-physics benchmark repository and demonstrate significant improvements over the single-resolution baseline model in both Variance-Scaled Root Mean Squared Error (VRMSE) and physics-driven metrics, including turbulent kinetic energy spectra and mass-temperature distributions. Despite having 30% fewer trainable parameters, MRPARCv2 outperforms its predecessor by up to 50% in roll-out prediction error and 86% in spectral error. We also performed a preliminary study on uncertainty quantification and analyzed the model's performance under different levels of abstraction of the flow, specifically on sampled subsets of field variables. We find that the absence of physical constraints on the equation of state (EOS) in the network architecture leads to degraded accuracy. A variable-substitution experiment confirms that this issue persists regardless of which physical quantity is predicted directly. Our findings highlight the advantages of multi-resolution inductive bias for capturing multi-scale flow dynamics and suggest the need for future PIML models to embed EOS knowledge to enhance physical fidelity.
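The abstract reports results in Variance-Scaled Root Mean Squared Error. As a rough illustration, one common formulation of such a metric normalizes the RMSE by the standard deviation of the target field, making errors comparable across variables of very different magnitudes; the exact definition used by The Well benchmark may differ, and the function name here is illustrative.

```python
import numpy as np

def vrmse(pred, target):
    """Variance-scaled RMSE: RMSE normalized by the standard deviation
    of the target field, so errors are comparable across variables with
    different magnitudes (one plausible formulation of the metric)."""
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return rmse / np.std(target)
```

A value of 1.0 then means the prediction error is as large as the natural variability of the field itself.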
Related papers
- Advancing accelerator virtual beam diagnostics through latent evolution modeling: an integrated solution to forward, inverse, tuning, and UQ problems [46.568487965999225]
We propose Latent Evolution Model (LEM), a hybrid machine learning framework with an autoencoder that projects high-dimensional phase spaces into lower-dimensional representations, and transformers to learn temporal dynamics in the latent space. For forward modeling, a CVAE encodes 15 unique projections of the 6D phase space into a latent representation, while a transformer predicts downstream latent states from upstream inputs. For inverse problems, we address two distinct challenges: (a) predicting upstream phase spaces from downstream observations, and (b) estimating RF settings from the latent space of
arXiv Detail & Related papers (2026-02-26T04:46:26Z) - Generative Deep Learning for the Two-Dimensional Quantum Rotor Model [7.545403823716431]
In this work, we design two models based on the foundational architecture of generative adversarial networks (GANs). Within a semi-supervised learning framework, we incorporate multiple layers of transposed convolutions in the generator. Analysis of one-dimensional latent variables associated with ground-state samples for different system sizes allows us to pinpoint the location of the critical point.
arXiv Detail & Related papers (2026-02-24T11:06:16Z) - KITINet: Kinetics Theory Inspired Network Architectures with PDE Simulation Approaches [43.872190335490515]
This paper introduces KITINet, a novel architecture that reinterprets feature propagation through the lens of non-equilibrium particle dynamics. At its core, we propose a residual module that models feature updates as the evolution of a particle system. This formulation mimics particle collisions and energy exchange, enabling adaptive feature refinement via physics-informed interactions.
arXiv Detail & Related papers (2025-05-23T13:58:29Z) - KO: Kinetics-inspired Neural Optimizer with PDE Simulation Approaches [45.173398806932376]
This paper introduces KO, a novel neural optimizer inspired by kinetic theory and partial differential equation (PDE) simulations. We reimagine the dynamics of network parameters as the evolution of a particle system governed by kinetic principles. This physics-driven approach inherently promotes parameter diversity during optimization, mitigating the phenomenon of parameter condensation.
arXiv Detail & Related papers (2025-05-20T18:00:01Z) - Physics-guided and fabrication-aware inverse design of photonic devices using diffusion models [43.51581973358462]
We present AdjointDiffusion, a physics-guided framework that integrates adjoint gradient sensitivity into the sampling process of diffusion models. Our method consistently outperforms state-of-the-art nonlinear gradient approaches in both efficiency and manufacturability.
arXiv Detail & Related papers (2025-04-23T19:54:33Z) - Hybrid machine learning models based on physical patterns to accelerate CFD simulations: a short guide on autoregressive models [3.780691701083858]
This study presents an innovative integration of High-Order Singular Value Decomposition (HOSVD) with Long Short-Term Memory (LSTM) architectures to address the complexities of reduced-order modeling (ROM) in fluid dynamics. The methodology is tested across numerical and experimental data sets, including two- and three-dimensional (2D and 3D) cylinder wake flows, spanning both laminar and turbulent regimes. The results demonstrate that HOSVD outperforms SVD in all tested scenarios, as evidenced by different error metrics.
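The HOSVD step of such a hybrid ROM can be sketched in a few lines of numpy: one thin SVD per mode unfolding gives the factor matrices, and projecting the snapshot tensor onto them gives the core. This is a generic truncated-HOSVD sketch (the LSTM temporal model is omitted, and the function name is illustrative), not the paper's exact pipeline.

```python
import numpy as np

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: one thin SVD per mode unfolding.
    Returns the core tensor and the per-mode factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode`: that axis first, everything else flattened.
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # Project each mode onto its truncated factor basis.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

With full ranks the factors are orthogonal and the decomposition reconstructs the tensor exactly; truncating the ranks yields the compressed representation the temporal model is trained on.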
arXiv Detail & Related papers (2025-04-09T10:56:03Z) - DeepFEA: Deep Learning for Prediction of Transient Finite Element Analysis Solutions [2.9784611307466187]
Finite Element Analysis (FEA) is a powerful but computationally intensive method for simulating physical phenomena.<n>Recent advancements in machine learning have led to surrogate models capable of accelerating FEA.<n>Motivated by this research gap, this study proposes DeepFEA, a deep learning-based framework.
arXiv Detail & Related papers (2024-12-05T12:46:18Z) - Stochastic Flow Matching for Resolving Small-Scale Physics [28.25905372253442]
In physical sciences such as weather, super-resolving small-scale details poses significant challenges.
We propose encoding the inputs to a latent base distribution, followed by flow matching to generate small-scale physics.
We conduct extensive experiments on both the real-world CWA weather dataset and the PDE-based Kolmogorov dataset.
arXiv Detail & Related papers (2024-10-17T21:09:13Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
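For context, the ensemble Kalman filter analysis step at the heart of such methods can be sketched with sample covariances, as below. This is the textbook stochastic-EnKF variant, not the paper's non-mean-field formulation; all names are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step.
    ensemble: (N, d) prior state ensemble; obs: (m,) observation;
    H: (m, d) linear observation operator; R: (m, m) obs-noise covariance."""
    N = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)        # state anomalies
    HE = ensemble @ H.T                         # ensemble mapped to obs space
    HA = HE - HE.mean(axis=0)                   # obs-space anomalies
    P_xy = A.T @ HA / (N - 1)                   # state-obs cross-covariance
    P_yy = HA.T @ HA / (N - 1) + R              # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    y_pert = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=N)
    return ensemble + (y_pert - HE) @ K.T
```

The appeal in a variational setting is that every quantity above is a closed-form function of the ensemble, with no per-step optimization over variational parameters.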
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - A Neural PDE Solver with Temporal Stencil Modeling [44.97241931708181]
Recent Machine Learning (ML) models have shown new promises in capturing important dynamics in high-resolution signals.
This study shows that significant information is often lost in the low-resolution down-sampled features.
We propose a new approach, which combines the strengths of advanced time-series sequence modeling and state-of-the-art neural PDE solvers.
arXiv Detail & Related papers (2023-02-16T06:13:01Z) - A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
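The POD reduction described here, from thousands of temporal points to a few modes with temporal coefficients, is a thin SVD of the snapshot matrix. A minimal sketch (function name illustrative; the deep-learning predictor for the coefficients is omitted):

```python
import numpy as np

def pod_reduce(snapshots, n_modes):
    """Reduce a snapshot matrix (n_space, n_time) to a few POD modes
    and their temporal coefficients via the thin SVD."""
    U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]                     # spatial POD modes
    coeffs = S[:n_modes, None] * Vt[:n_modes]  # temporal coefficients (n_modes, n_time)
    return modes, coeffs
```

A surrogate then forecasts only the coefficient time series; the flow field is recovered as `modes @ coeffs`.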
arXiv Detail & Related papers (2023-01-24T08:39:20Z) - Forecasting through deep learning and modal decomposition in two-phase concentric jets [2.362412515574206]
This work aims to improve fuel chamber injectors' performance in turbofan engines.
It requires the development of models that allow real-time prediction and improvement of the fuel/air mixture.
arXiv Detail & Related papers (2022-12-24T12:59:41Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore accelerating large-model inference via conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
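The mixture-of-experts forward pass that such a transformed model runs can be sketched as top-k routing over expert feed-forward blocks. This is a generic illustrative sketch of MoE inference, not the MoEfication splitting procedure itself, and all names and shapes are assumptions.

```python
import numpy as np

def moe_forward(x, w1, w2, router, k=2):
    """Top-k mixture-of-experts FFN.
    x: (d,) token activation; w1: (E, d, h) and w2: (E, h, d) expert
    weights; router: (d, E) routing matrix."""
    scores = x @ router                          # (E,) routing logits
    top = np.argsort(scores)[-k:]                # indices of the top-k experts
    gates = np.exp(scores[top] - scores[top].max())
    gates /= gates.sum()                         # softmax over selected experts only
    out = np.zeros_like(x)
    for g, e in zip(gates, top):
        hidden = np.maximum(x @ w1[e], 0.0)      # ReLU expert FFN
        out += g * (hidden @ w2[e])
    return out
```

Only k of the E experts are evaluated per token, which is where the inference savings over the dense FFN come from.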
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
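The core idea behind PINNs is a loss that penalizes the PDE residual of the network's output plus boundary mismatch. Below is an illustrative version of that loss for a toy 1D Poisson problem u''(x) = f(x), with finite differences standing in for the automatic differentiation a real PINN would use; all names are assumptions.

```python
import numpy as np

def pinn_style_loss(u, x, f, bc):
    """Physics-informed loss for u''(x) = f(x) on a uniform grid x,
    with Dirichlet boundary values bc = (u_left, u_right).
    u: candidate solution values evaluated on the grid."""
    h = x[1] - x[0]
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2  # interior second derivative
    residual = np.mean((u_xx - f(x[1:-1])) ** 2)    # PDE residual term
    boundary = (u[0] - bc[0]) ** 2 + (u[-1] - bc[1]) ** 2
    return residual + boundary
```

The exact solution drives both terms to zero, which is why the loss can be minimized without any labeled solution data.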
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.