Physical invariance in neural networks for subgrid-scale scalar flux
modeling
- URL: http://arxiv.org/abs/2010.04663v4
- Date: Mon, 1 Mar 2021 15:58:38 GMT
- Title: Physical invariance in neural networks for subgrid-scale scalar flux
modeling
- Authors: Hugo Frezat, Guillaume Balarac, Julien Le Sommer, Ronan Fablet,
Redouane Lguensat
- Abstract summary: We present a new strategy to model the subgrid-scale scalar flux in a three-dimensional turbulent incompressible flow using physics-informed neural networks (NNs).
We show that the proposed transformation-invariant NN model outperforms both purely data-driven ones and parametric state-of-the-art subgrid-scale models.
- Score: 5.333802479607541
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we present a new strategy to model the subgrid-scale scalar
flux in a three-dimensional turbulent incompressible flow using
physics-informed neural networks (NNs). When trained from direct numerical
simulation (DNS) data, state-of-the-art neural networks, such as convolutional
neural networks, may not preserve well-known physical priors, which may in turn
call into question their applicability to real case studies. To address this issue, we
investigate embedding hard and soft constraints into the model, based on classical
transformation invariances and symmetries derived from physical laws. From
simulation-based experiments, we show that the proposed
transformation-invariant NN model outperforms both purely data-driven models
and parametric state-of-the-art subgrid-scale models. The considered
invariances are regarded as regularizers on physical metrics during the a
priori evaluation and constrain the distribution tails of the predicted
subgrid-scale term to be closer to the DNS. They also increase the stability
and performance of the model when used as a surrogate during a large-eddy
simulation. Moreover, the transformation-invariant NN is shown to generalize to
regimes that have not been seen during the training phase.
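As a rough illustration of the two kinds of constraints mentioned in the abstract, the Python sketch below shows how a rotational symmetry could be imposed on a generic data-driven SGS model, either as a soft penalty added to the training loss or as a hard constraint obtained by averaging predictions over the transformation group. The `model` interface, the restriction to 90-degree rotations in one plane, and the treatment of the output as a scalar field are simplifying assumptions, not details taken from the paper.

```python
# Hedged sketch (not the authors' implementation): imposing a rotational
# symmetry on a generic subgrid-scale (SGS) model, either as a soft penalty
# in the training loss or as a hard constraint by group averaging. For
# brevity the output is treated as a scalar field; a full treatment of the
# SGS flux vector would also have to rotate its components.
import torch

def rot90_xy(field, k=1):
    """Rotate a batch of 3-D fields shaped (B, C, X, Y, Z) by k*90 degrees in the x-y plane."""
    return torch.rot90(field, k=k, dims=(2, 3))

def soft_invariance_loss(model, inputs, targets, weight=0.1):
    """Data term plus a penalty for breaking equivariance under 90-degree rotations."""
    pred = model(inputs)
    data_loss = torch.mean((pred - targets) ** 2)
    # Mismatch between "rotate then predict" and "predict then rotate".
    gap = model(rot90_xy(inputs)) - rot90_xy(pred)
    return data_loss + weight * torch.mean(gap ** 2)

def hard_invariant_prediction(model, inputs):
    """Hard constraint: average the prediction over the four in-plane rotations."""
    preds = [rot90_xy(model(rot90_xy(inputs, k)), -k) for k in range(4)]
    return sum(preds) / 4.0
```

With the soft penalty the network is merely encouraged to respect the symmetry, whereas the group-averaged prediction satisfies it exactly by construction, at the cost of one extra forward pass per group element.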
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- MMGP: a Mesh Morphing Gaussian Process-based machine learning method for regression of physical problems under non-parameterized geometrical variability [0.30693357740321775]
We propose a machine learning method that does not rely on graph neural networks.
The proposed methodology can easily deal with large meshes without the need for explicit shape parameterization.
arXiv Detail & Related papers (2023-05-22T09:50:15Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which can then be applied in real time to multi-dimensional scattering data (a generic sketch of this surrogate-based parameter-fitting loop is given after the list below).
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks [25.051918587650636]
Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences.
We propose an unsupervised MTHU algorithm based on variational recurrent neural networks.
arXiv Detail & Related papers (2023-03-19T04:51:34Z)
- On feedforward control using physics-guided neural networks: Training cost regularization and optimized initialization [0.0]
Performance of model-based feedforward controllers is typically limited by the accuracy of the inverse system dynamics model.
This paper proposes a regularization method via identified physical parameters.
It is validated on a real-life industrial linear motor, where it delivers better tracking accuracy and extrapolation.
arXiv Detail & Related papers (2022-01-28T12:51:25Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling (a minimal classical-MDS sketch follows the list below).
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations (a sketch of one such update step appears after this list).
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
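The sketches below expand on three of the entries above; they are generic illustrations under stated assumptions, not code released with those papers. First, the "Capturing dynamical correlations" entry combines a trained surrogate with automatic differentiation to recover unknown parameters from experimental data: assuming a differentiable surrogate `surrogate(params)` that maps Hamiltonian parameters to predicted observables, the recovery step can be written as a plain gradient-descent loop.

```python
# Hedged sketch: recovering unknown model parameters from measured data by
# gradient descent through a trained, differentiable surrogate. `surrogate`
# and `observed` are assumed inputs; `init_params` is a list of floats.
import torch

def fit_parameters(surrogate, observed, init_params, lr=1e-2, steps=500):
    params = torch.tensor(init_params, requires_grad=True)
    optimizer = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        predicted = surrogate(params)                 # differentiable forward model
        loss = torch.mean((predicted - observed) ** 2)
        loss.backward()                               # autodiff w.r.t. the parameters
        optimizer.step()
    return params.detach()
```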
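Second, the "Revisiting Transformation Invariant Geometric Deep Learning" entry builds on transformation-invariant, distance-preserving initial representations obtained by modifying multi-dimensional scaling. A minimal classical-MDS routine conveys the underlying idea: because it uses only pairwise distances, its output does not change (up to an orthogonal transform of the embedding) when the input points are rotated or translated.

```python
# Hedged sketch: classical multi-dimensional scaling (MDS). Built purely
# from pairwise distances, so the embedding is unaffected (up to an
# orthogonal transform) by rotations and translations of the input points.
import torch

def classical_mds(points, dim=2):
    """Embed an (n, d) point cloud into `dim` dimensions via its distance matrix."""
    d2 = torch.cdist(points, points) ** 2           # squared pairwise distances
    n = points.shape[0]
    j = torch.eye(n) - torch.ones(n, n) / n         # double-centering matrix
    b = -0.5 * j @ d2 @ j                           # Gram matrix recovered from distances
    eigvals, eigvecs = torch.linalg.eigh(b)         # eigenvalues in ascending order
    scale = eigvals[-dim:].clamp(min=0).sqrt()
    return eigvecs[:, -dim:] * scale                # top-`dim` coordinates
```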
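Third, the "Liquid Time-constant Networks" entry constructs networks of linear first-order dynamical systems whose dynamics are modulated by a learned nonlinearity. The fragment below sketches one semi-implicit update of such a unit; the state equation dx/dt = -(1/tau + f(x, I)) x + f(x, I) A and the function names are illustrative rather than the paper's reference implementation.

```python
# Hedged sketch of one step of a liquid time-constant style unit:
# dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A, advanced with a
# semi-implicit Euler update so that the state stays bounded.
import torch

def ltc_step(x, inputs, f, tau, A, dt=0.1):
    gate = f(torch.cat([x, inputs], dim=-1))    # learned, input-dependent modulation
    return (x + dt * gate * A) / (1.0 + dt * (1.0 / tau + gate))
```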