Convolutional-network models to predict wall-bounded turbulence from
wall quantities
- URL: http://arxiv.org/abs/2006.12483v1
- Date: Mon, 22 Jun 2020 17:57:40 GMT
- Title: Convolutional-network models to predict wall-bounded turbulence from
wall quantities
- Authors: L. Guastoni, A. G\"uemes, A. Ianiro, S. Discetti, P. Schlatter, H.
Azizpour, R. Vinuesa
- Abstract summary: Two models are trained to predict the two-dimensional velocity-fluctuation fields at different wall-normal locations in a turbulent open channel flow.
The first model is a fully-convolutional neural network (FCN) which directly predicts the fluctuations.
The second one reconstructs the flow fields using a linear combination of orthonormal basis functions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two models based on convolutional neural networks are trained to predict the
two-dimensional velocity-fluctuation fields at different wall-normal locations
in a turbulent open channel flow, using the wall-shear-stress components and
the wall pressure as inputs. The first model is a fully-convolutional neural
network (FCN) which directly predicts the fluctuations, while the second one
reconstructs the flow fields using a linear combination of orthonormal basis
functions, obtained through proper orthogonal decomposition (POD), hence named
FCN-POD. Both models are trained using data from two direct numerical
simulations (DNS) at friction Reynolds numbers $Re_{\tau} = 180$ and $550$.
Thanks to their ability to predict the nonlinear interactions in the flow, both
models show a better prediction performance than the extended proper orthogonal
decomposition (EPOD), which establishes a linear relation between input and
output fields. The performance of the various models is compared based on
predictions of the instantaneous fluctuation fields, turbulence statistics and
power-spectral densities. The FCN exhibits the best predictions closer to the
wall, whereas the FCN-POD model provides better predictions at larger
wall-normal distances. We also assessed the feasibility of performing transfer
learning for the FCN model, using the weights from $Re_{\tau}=180$ to
initialize those of the $Re_{\tau}=550$ case. Our results indicate that it is
possible to obtain a performance similar to that of the reference model up to
$y^{+}=50$, with $50\%$ and $25\%$ of the original training data. These
non-intrusive sensing models will play an important role in applications
related to closed-loop control of wall-bounded turbulence.
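The FCN-POD approach reconstructs each flow field as a linear combination of orthonormal basis functions obtained through POD. A minimal sketch of that reconstruction step is shown below, using NumPy's SVD to extract the POD modes; the snapshot matrix, grid size, and number of retained modes are illustrative placeholders, not the configuration used in the paper (where a network predicts the POD coefficients).

```python
import numpy as np

# Illustrative snapshot matrix: rows are flattened 2-D fluctuation fields.
# Random data stands in for the DNS snapshots used in the paper.
rng = np.random.default_rng(0)
n_snapshots, n_points = 100, 64 * 64
snapshots = rng.standard_normal((n_snapshots, n_points))

# POD modes from the SVD of the mean-subtracted snapshot matrix;
# the rows of Vt form an orthonormal basis of the snapshot space.
fluct = snapshots - snapshots.mean(axis=0)
_, _, vt = np.linalg.svd(fluct, full_matrices=False)
n_modes = 10
modes = vt[:n_modes]  # keep the most energetic modes

# In FCN-POD a network would predict these coefficients from wall
# quantities; here we simply project one snapshot onto the basis.
coeffs = fluct[0] @ modes.T

# Reconstruction as a linear combination of orthonormal basis functions.
reconstruction = coeffs @ modes

# Relative error of the truncated reconstruction.
err = np.linalg.norm(fluct[0] - reconstruction) / np.linalg.norm(fluct[0])
print(f"relative reconstruction error with {n_modes} modes: {err:.3f}")
```

Because the basis is orthonormal, the coefficients are obtained by simple projection, and truncating to the most energetic modes gives the best rank-limited linear reconstruction in the least-squares sense.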
Related papers
- Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x}\sim p^{\rm post}(\mathbf{x})\propto p(\mathbf{x})r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from
arXiv Detail & Related papers (2024-05-31T16:18:46Z) - SE(3)-Stochastic Flow Matching for Protein Backbone Generation [54.951832422425454]
We introduce FoldFlow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\mathrm{D}$ rigid motions.
Our family of FoldFlow generative models offers several advantages over previous approaches to the generative modeling of proteins.
arXiv Detail & Related papers (2023-10-03T19:24:24Z) - Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative
Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z) - Predicting the wall-shear stress and wall pressure through convolutional
neural networks [1.95992742032823]
This study aims to assess the capability of convolution-based neural networks to predict wall quantities in a turbulent open channel flow.
The predictions from the FCN are compared against the predictions from a proposed R-Net architecture.
The R-Net is also able to predict the wall-shear-stress and wall-pressure fields using the velocity-fluctuation fields at $y^{+} = 50$.
arXiv Detail & Related papers (2023-03-01T18:03:42Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Surrogate Model for Shallow Water Equations Solvers with Deep Learning [6.123836425156534]
This work introduces an efficient, accurate, and flexible surrogate model, NN-p2p, based on deep learning.
The input includes both spatial coordinates and boundary features that can describe the geometry of hydraulic structures.
NN-p2p has good performance in predicting flow around piers unseen by the neural network.
arXiv Detail & Related papers (2021-12-20T22:30:11Z) - Predicting the near-wall region of turbulence through convolutional
neural networks [0.0]
A neural-network-based approach to predict the near-wall behaviour in a turbulent open channel flow is investigated.
The fully-convolutional network (FCN) is trained to predict the two-dimensional velocity-fluctuation fields at $y^{+}_{\rm target}$.
FCN can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^{+} = 50$.
arXiv Detail & Related papers (2021-07-15T13:58:26Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Neural Networks are Convex Regularizers: Exact Polynomial-time Convex
Optimization Formulations for Two-layer Networks [70.15611146583068]
We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs)
Our theory utilizes semi-infinite duality and minimum norm regularization.
arXiv Detail & Related papers (2020-02-24T21:32:41Z) - Gravitational-wave parameter estimation with autoregressive neural
network flows [0.0]
We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks.
A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one.
We build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework.
arXiv Detail & Related papers (2020-02-18T15:44:04Z) - Prediction of wall-bounded turbulence from wall quantities using
convolutional neural networks [0.0]
A fully-convolutional neural-network model is used to predict the streamwise velocity fields at several wall-normal locations.
Various networks are trained for predictions at three inner-scaled locations.
arXiv Detail & Related papers (2019-12-30T15:34:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.