Predicting the near-wall region of turbulence through convolutional
neural networks
- URL: http://arxiv.org/abs/2107.07340v1
- Date: Thu, 15 Jul 2021 13:58:26 GMT
- Title: Predicting the near-wall region of turbulence through convolutional
neural networks
- Authors: A. G. Balasubramanian, L. Guastoni, A. Güemes, A. Ianiro, S.
  Discetti, P. Schlatter, H. Azizpour, R. Vinuesa
- Abstract summary: A neural-network-based approach to predict the near-wall behaviour in a turbulent open channel flow is investigated.
The fully-convolutional network (FCN) is trained to predict the two-dimensional velocity-fluctuation fields at $y^{+}_{\rm target}$.
The FCN can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^{+} = 50$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modelling the near-wall region of wall-bounded turbulent flows is a
widespread practice to reduce the computational cost of large-eddy simulations
(LESs) at high Reynolds number. As a first step towards a data-driven
wall-model, a neural-network-based approach to predict the near-wall behaviour
in a turbulent open channel flow is investigated. The fully-convolutional
network (FCN) proposed by Guastoni et al. [preprint, arXiv:2006.12483] is
trained to predict the two-dimensional velocity-fluctuation fields at
$y^{+}_{\rm target}$, using the sampled fluctuations in wall-parallel planes
located farther from the wall, at $y^{+}_{\rm input}$. The data for training
and testing is obtained from a direct numerical simulation (DNS) at friction
Reynolds numbers $Re_{\tau} = 180$ and $550$. The turbulent
velocity-fluctuation fields are sampled at various wall-normal locations, i.e.
$y^{+} = \{15, 30, 50, 80, 100, 120, 150\}$. At $Re_{\tau}=550$, the FCN can
take advantage of the self-similarity in the logarithmic region of the flow and
predict the velocity-fluctuation fields at $y^{+} = 50$ using the
velocity-fluctuation fields at $y^{+} = 100$ as input, with less than 20% error
in the prediction of the streamwise-fluctuation intensity. These results are an
encouraging starting point for developing a neural-network-based approach to
modelling turbulence at the wall in numerical simulations.
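
As a rough illustration of the input-output mapping described in the abstract, the sketch below builds a small fully-convolutional network in PyTorch that maps a wall-parallel plane of velocity fluctuations at $y^{+}_{\rm input}$ to a plane at $y^{+}_{\rm target}$, together with one plausible way of quantifying the error in the streamwise-fluctuation intensity. The layer counts, kernel sizes, grid size, and helper names (PlaneToPlaneFCN, streamwise_intensity_error) are assumptions made for illustration; they are not the architecture or training setup of Guastoni et al.

```python
# Illustrative sketch only: a minimal plane-to-plane FCN, not the model of
# Guastoni et al. (arXiv:2006.12483). All hyperparameters are assumptions.
import torch
import torch.nn as nn

class PlaneToPlaneFCN(nn.Module):
    def __init__(self, channels: int = 3, width: int = 64):
        super().__init__()
        def conv(cin, cout):
            # Circular padding mimics the periodic wall-parallel directions of
            # an open channel flow (an assumption made for this sketch).
            return nn.Conv2d(cin, cout, kernel_size=3, padding=1,
                             padding_mode="circular")
        self.net = nn.Sequential(
            conv(channels, width), nn.ReLU(),
            conv(width, width), nn.ReLU(),
            conv(width, width), nn.ReLU(),
            conv(width, channels),  # (u', v', w') at y+_target
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, nx, nz) velocity fluctuations sampled at y+_input
        return self.net(x)

def streamwise_intensity_error(pred: torch.Tensor, ref: torch.Tensor) -> float:
    # Relative error of the streamwise-fluctuation intensity <u'u'>: one
    # plausible reading of the "less than 20% error" figure in the abstract.
    uu_pred = pred[:, 0].pow(2).mean()
    uu_ref = ref[:, 0].pow(2).mean()
    return ((uu_pred - uu_ref).abs() / uu_ref).item()

# Usage with random stand-in tensors; real inputs would be DNS planes at
# Re_tau = 180 or 550, normalized as described in the paper.
model = PlaneToPlaneFCN()
x = torch.randn(4, 3, 192, 192)   # fluctuations at y+_input (e.g. y+ = 100)
y_pred = model(x)                 # predicted fluctuations at y+_target (e.g. y+ = 50)
print(streamwise_intensity_error(y_pred, torch.randn_like(y_pred)))
```

Circular padding is used here only because the wall-parallel directions of an open channel are periodic; whether the original FCN handles boundaries this way is not stated in the abstract.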
Related papers
- Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412]
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z) - Normalizing flows as approximations of optimal transport maps via linear-control neural ODEs [49.1574468325115]
"Normalizing Flows" is related to the task of constructing invertible transport maps between probability measures by means of deep neural networks.
We consider the problem of recovering the $W_2$-optimal transport map $T$ between absolutely continuous measures $\mu, \nu \in \mathcal{P}(\mathbb{R}^n)$ as the flow of a linear-control neural ODE.
arXiv Detail & Related papers (2023-11-02T17:17:03Z) - Simulation-free Schr\"odinger bridges via score and flow matching [89.4231207928885]
We present simulation-free score and flow matching ([SF]$^2$M).
Our method generalizes both the score-matching loss used in the training of diffusion models and the recently proposed flow matching loss used in the training of continuous flows.
Notably, [SF]$^2$M is the first method to accurately model cell dynamics in high dimensions and can recover known gene regulatory networks from simulated data.
arXiv Detail & Related papers (2023-07-07T15:42:35Z) - Wide neural networks: From non-gaussian random fields at initialization
to the NTK geometry of training [0.0]
Recent developments in applications of artificial neural networks with over $n = 10^{14}$ parameters make it extremely important to study the large-$n$ behaviour of such networks.
Most works studying wide neural networks have focused on the infinite-width $n \to +\infty$ limit of such networks.
In this work we will study their behavior for large, but finite $n$.
arXiv Detail & Related papers (2023-04-06T21:34:13Z) - Predicting the wall-shear stress and wall pressure through convolutional
neural networks [1.95992742032823]
This study aims to assess the capability of convolution-based neural networks to predict wall quantities in a turbulent open channel flow.
The predictions from the FCN are compared against the predictions from a proposed R-Net architecture.
The R-Net is also able to predict the wall-shear-stress and wall-pressure fields using the velocity-fluctuation fields at $y^{+} = 50$.
arXiv Detail & Related papers (2023-03-01T18:03:42Z) - Forecasting subcritical cylinder wakes with Fourier Neural Operators [58.68996255635669]
We apply a state-of-the-art operator learning technique to forecast the temporal evolution of experimentally measured velocity fields.
We find that FNOs are capable of accurately predicting the evolution of experimental velocity fields throughout the range of Reynolds numbers tested.
arXiv Detail & Related papers (2023-01-19T20:04:36Z) - Physics-informed compressed sensing for PC-MRI: an inverse Navier-Stokes
problem [78.20667552233989]
We formulate a physics-informed compressed sensing (PICS) method for the reconstruction of velocity fields from noisy and sparse magnetic resonance signals.
We find that the method is capable of reconstructing and segmenting the velocity fields from sparsely-sampled signals.
arXiv Detail & Related papers (2022-07-04T14:51:59Z) - Bounding the Width of Neural Networks via Coupled Initialization -- A
Worst Case Analysis [121.9821494461427]
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z) - Fast, high-fidelity Lyman $\alpha$ forests with convolutional neural
networks [1.0499611180329804]
We train a convolutional neural network to reconstruct the baryon hydrodynamic variables on scales relevant to the Lyman-$\alpha$ (Ly$\alpha$) forest.
Our method enables rapid estimation of these fields at a resolution of $\sim 20$ kpc, and captures the statistics of the Ly$\alpha$ forest with much greater accuracy than existing approximations.
arXiv Detail & Related papers (2021-06-23T21:41:47Z) - Convolutional-network models to predict wall-bounded turbulence from
wall quantities [0.0]
Two models are trained to predict the two-dimensional velocity-fluctuation fields at different wall-normal locations in a turbulent open channel flow.
The first model is a fully-convolutional neural network (FCN) which directly predicts the fluctuations.
The second one reconstructs the flow fields using a linear combination of orthonormal basis functions.
arXiv Detail & Related papers (2020-06-22T17:57:40Z) - Prediction of wall-bounded turbulence from wall quantities using
convolutional neural networks [0.0]
A fully-convolutional neural-network model is used to predict the streamwise velocity fields at several wall-normal locations.
Various networks are trained for predictions at three inner-scaled locations.
arXiv Detail & Related papers (2019-12-30T15:34:41Z)