Physics-aware deep neural networks for surrogate modeling of turbulent
natural convection
- URL: http://arxiv.org/abs/2103.03565v1
- Date: Fri, 5 Mar 2021 09:48:57 GMT
- Title: Physics-aware deep neural networks for surrogate modeling of turbulent
natural convection
- Authors: Didier Lucor (LISN), Atul Agrawal (TUM, LISN), Anne Sergent (LISN, UFR
919)
- Abstract summary: We investigate PINN surrogate modeling for turbulent Rayleigh-Bénard convection flows.
We show how a new padding technique acts as a regularization close to the training boundaries, which are zones of poor accuracy for standard PINNs.
The predictive accuracy of the surrogate over the entire half a billion DNS coordinates yields errors for all flow variables between 0.3% and 4% in the relative $L^2$ norm.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have explored the potential of machine learning as data-driven
turbulence closures for RANS and LES techniques. Beyond these advances, the
high expressivity and agility of physics-informed neural networks (PINNs) make
them promising candidates for full fluid flow PDE modeling. An important
question is whether this new paradigm, exempt from the traditional notion of
discretization of the underlying operators, which is closely tied to the
resolution of the flow scales, is capable of sustaining high levels of
turbulence characterized by multi-scale features. We investigate the use of PINN
surrogate modeling for turbulent Rayleigh-Bénard (RB) convection flows in
rough and smooth rectangular cavities, mainly relying on DNS temperature data
from the fluid bulk. We carefully quantify the computational requirements under
which the formulation is capable of accurately recovering the flow hidden
quantities. We then propose a new padding technique to distribute some of the
scattered coordinates-at which PDE residuals are minimized-around the region of
labeled data acquisition. We show how it comes to play as a regularization
close to the training boundaries which are zones of poor accuracy for standard
PINNs and results in a noticeable global accuracy improvement at iso-budget.
Finally, we propose for the first time to relax the incompressibility condition
in such a way that it drastically benefits the optimization search and results
in a much improved convergence of the composite loss function. The RB results
obtained at high Rayleigh number $Ra = 2 \times 10^{9}$ are particularly
impressive: the predictive accuracy of the surrogate over the entire half a
billion DNS coordinates yields errors for all flow variables ranging between
0.3% and 4% in the relative $L^2$ norm, with training relying on only 1.6% of
the DNS data points.
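As a concrete illustration of the two main ingredients above, here is a minimal PyTorch-style sketch of a composite PINN loss with collocation points padded into a margin around the labeled-data region and incompressibility relaxed into a weighted divergence penalty. The network size, sampling margin, loss weights, and the truncated residuals (momentum and diffusion terms are omitted) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: composite PINN loss with padded collocation points and a
# relaxed (penalized) incompressibility condition. All sizes/weights
# are assumptions for illustration.
import torch

net = torch.nn.Sequential(                 # (x, y, t) -> (u, v, p, T)
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 4),
)

def grad_wrt_coords(f, coords):
    """d f / d (x, y, t) for a scalar field f of shape (N, 1)."""
    return torch.autograd.grad(f, coords, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def residuals(coords):
    coords = coords.clone().requires_grad_(True)
    u, v, p, T = net(coords).split(1, dim=1)
    du, dv, dT = (grad_wrt_coords(f, coords) for f in (u, v, T))
    div = du[:, 0:1] + dv[:, 1:2]          # velocity divergence
    # Temperature transport residual (diffusion term omitted for brevity).
    adv_T = dT[:, 2:3] + u * dT[:, 0:1] + v * dT[:, 1:2]
    return div, adv_T

def padded_collocation(n_in, n_pad, lo, hi, margin=0.05):
    """Residual points inside the data region plus a thin padding shell."""
    inner = lo + (hi - lo) * torch.rand(n_in, 3)
    pad = (lo - margin) + (hi - lo + 2 * margin) * torch.rand(n_pad, 3)
    return torch.cat([inner, pad])

def composite_loss(xyz, T_obs, coll, w_pde=1.0, w_div=0.1):
    data_loss = (net(xyz)[:, 3:4] - T_obs).pow(2).mean()
    div, adv_T = residuals(coll)
    # Divergence enters as a weighted penalty rather than a hard constraint.
    return data_loss + w_pde * adv_T.pow(2).mean() + w_div * div.pow(2).mean()

loss = composite_loss(torch.rand(256, 3), torch.rand(256, 1),
                      padded_collocation(1024, 256, torch.zeros(3), torch.ones(3)))
loss.backward()
```

The design choice mirrored here is that the divergence-free condition contributes to the loss as a soft penalty, which, as the abstract reports, eases the optimization of the composite objective.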
Related papers
- Using Parametric PINNs for Predicting Internal and External Turbulent Flows [6.387263468033964]
We build upon the previously proposed RANS-PINN framework, which only focused on predicting flow over a cylinder.
We investigate its accuracy in predicting relevant turbulent flow variables for both internal and external flows.
arXiv Detail & Related papers (2024-10-24T17:08:20Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
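As a rough illustration of the mixed-precision idea above (a generic sketch, not the paper's neural-operator setup):

```python
# Sketch: mixed-precision forward pass via torch.autocast. The toy MLP
# stands in for an operator model and is an illustrative assumption.
import torch

model = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.GELU(),
                            torch.nn.Linear(256, 256))
x = torch.randn(32, 256)

# Eligible ops run in reduced precision; parameters stay in float32.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16: activations take roughly half the memory
```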
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Sparsifying Bayesian neural networks with latent binary variables and normalizing flows [10.865434331546126]
We consider two extensions to the latent binary Bayesian neural network (LBBNN) method.
Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian.
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
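The local reparametrization trick mentioned above can be sketched in a few lines; the toy Gaussian mean-field layer below is an assumption for illustration, not the LBBNN code:

```python
# Sketch: local reparametrization trick (LRT). Instead of sampling a full
# weight matrix per example, sample the pre-activations directly, which is
# cheaper and has lower gradient variance.
import torch

def lrt_linear(x, w_mu, w_logvar):
    """Sample b ~ N(x @ w_mu, x^2 @ exp(w_logvar)) instead of sampling W."""
    mean = x @ w_mu
    var = (x ** 2) @ w_logvar.exp()
    return mean + var.sqrt() * torch.randn_like(mean)

x = torch.randn(128, 20)                  # batch of inputs
w_mu = torch.randn(20, 50) * 0.1          # variational means (assumption)
w_logvar = torch.full((20, 50), -4.0)     # variational log-variances
h = lrt_linear(x, w_mu, w_logvar)         # one stochastic forward pass
```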
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have proven effective for solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
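The implicit-update idea above can be illustrated on a toy least-squares problem, where each implicit step reduces to a linear solve; the stand-in loss is an assumption, not a PINN objective:

```python
# Sketch: implicit (stochastic) gradient descent. Each step solves
# theta_new = theta - eta * grad(L)(theta_new), which for a quadratic loss
# is a linear system and stays stable even for a very large step size.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
theta, eta = np.zeros(10), 10.0          # deliberately large step size

for _ in range(100):
    # Implicit step: (I + eta A^T A) theta_new = theta + eta A^T b
    theta = np.linalg.solve(np.eye(10) + eta * A.T @ A, theta + eta * A.T @ b)

print(np.linalg.norm(A @ theta - b))     # converges despite the huge eta
```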
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
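For reference, a minimal sketch of building the reduced-order POD basis mentioned above from solution snapshots; the synthetic snapshot matrix is an assumption:

```python
# Sketch: POD basis via SVD of a snapshot matrix. The networks in the paper
# are regressed onto coefficients in such a basis.
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(2000, 40))   # 40 snapshots of a 2000-dim field

# POD modes = left singular vectors of the mean-centered snapshot matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 8                                     # retained modes (assumption)
basis = U[:, :r]                          # reduced-order POD basis

# Any field is now represented by r coefficients instead of 2000 values.
coeffs = basis.T @ (snapshots[:, 0:1] - mean)
recon = mean + basis @ coeffs
```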
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
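The fixed-point view above can be sketched with a toy contraction solved by plain iteration; the paper's models instead differentiate through the equilibrium implicitly:

```python
# Sketch: deep-equilibrium idea. Rather than unrolling a fixed number of
# recurrent updates, solve directly for the fixed point z* = f(z*, x).
# The toy map f below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
W = 0.08 * rng.normal(size=(16, 16))      # small weights => contraction
U = rng.normal(size=(16, 8))

def f(z, x):
    return np.tanh(W @ z + U @ x)

def solve_fixed_point(x, tol=1e-8, max_iter=500):
    z = np.zeros(16)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z_next

x = rng.normal(size=8)
z_star = solve_fixed_point(x)
print(np.linalg.norm(f(z_star, x) - z_star))  # ~0: z_star is an equilibrium
```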
- Accelerated replica exchange stochastic gradient Langevin diffusion enhanced Bayesian DeepONet for solving noisy parametric PDEs [7.337247167823921]
We propose a training framework for replica-exchange Langevin diffusion that exploits the neural network architecture of DeepONets.
We show that the proposed framework's exploration and exploitation capabilities enable improved training convergence for DeepONets in noisy scenarios.
We also show that replica-exchange Langevin diffusion improves the DeepONet's mean prediction accuracy in noisy scenarios.
arXiv Detail & Related papers (2021-11-03T19:23:59Z)
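The replica-exchange mechanism above can be illustrated with a two-temperature Langevin sampler on a toy double-well loss; all hyperparameters are illustrative assumptions:

```python
# Sketch: replica-exchange Langevin diffusion. A hot chain explores while a
# cold chain exploits; occasional Metropolis swaps let exploration help the
# cold chain escape local minima.
import numpy as np

rng = np.random.default_rng(3)

def loss(theta):
    return (theta ** 2 - 1.0) ** 2        # double well, minima at +/- 1

def grad(theta):
    return 4.0 * theta * (theta ** 2 - 1.0)

eta, temps = 1e-3, np.array([0.05, 1.0])  # [cold, hot] temperatures
theta = np.array([2.0, 2.0])              # [cold chain, hot chain]

for step in range(5000):
    noise = rng.normal(size=2)
    theta -= eta * grad(theta) - np.sqrt(2.0 * eta * temps) * noise
    if step % 50 == 0:                    # propose a swap between replicas
        log_alpha = (1 / temps[0] - 1 / temps[1]) * (loss(theta[0]) - loss(theta[1]))
        if np.log(rng.uniform()) < log_alpha:
            theta = theta[::-1].copy()

print(theta[0])                           # cold replica settles near +/- 1
```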
- Estimating permeability of 3D micro-CT images by physics-informed CNNs based on DNS [1.6274397329511197]
This paper presents a novel methodology for permeability prediction from micro-CT scans of geological rock samples.
The training data set for CNNs dedicated to permeability prediction consists of permeability labels that are typically generated by classical lattice Boltzmann methods (LBM).
We instead perform direct numerical simulation (DNS) by solving the stationary Stokes equation in an efficient and distributed-parallel manner.
arXiv Detail & Related papers (2021-09-04T08:43:19Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
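A schematic of the multi-fidelity composition above, with a low-fidelity surrogate plus a learned correction; the Bayesian treatment of the weights is omitted, and the architecture is an assumption:

```python
# Sketch: multi-fidelity surrogate. A network fit to cheap low-fidelity data
# plus a correction network that learns the discrepancy from scarce
# high-fidelity data.
import torch

lo_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))
corr_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

def hi_pred(x):
    y_lo = lo_net(x)                                      # low-fidelity guess
    return y_lo + corr_net(torch.cat([x, y_lo], dim=1))   # learned correction

y_hi = hi_pred(torch.linspace(0.0, 1.0, steps=8).unsqueeze(1))
```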
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale learning with deep neural networks.
Our algorithm requires far fewer communication rounds than existing distributed baselines while retaining theoretical convergence guarantees.
Experiments on several datasets demonstrate the effectiveness of the method and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.