Prediction of wall-bounded turbulence from wall quantities using
convolutional neural networks
- URL: http://arxiv.org/abs/1912.12969v1
- Date: Mon, 30 Dec 2019 15:34:41 GMT
- Title: Prediction of wall-bounded turbulence from wall quantities using
convolutional neural networks
- Authors: L. Guastoni, M. P. Encinar, P. Schlatter, H. Azizpour, R. Vinuesa
- Abstract summary: A fully-convolutional neural-network model is used to predict the streamwise velocity fields at several wall-normal locations.
Various networks are trained for predictions at three inner-scaled locations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fully-convolutional neural-network model is used to predict the streamwise
velocity fields at several wall-normal locations by taking as input the
streamwise and spanwise wall-shear-stress planes in a turbulent open channel
flow. The training data are generated by performing a direct numerical
simulation (DNS) at a friction Reynolds number of $Re_{\tau}=180$. Various
networks are trained for predictions at three inner-scaled locations ($y^+ =
15,~30,~50$) and for different time steps between input samples $\Delta
t^{+}_{s}$. The inherent non-linearity of the neural-network model enables a
better prediction capability than linear methods, with a lower error in both
the instantaneous flow fields and turbulent statistics. Using a dataset with
higher $\Delta t^+_{s}$ improves the generalization at all the considered
wall-normal locations, as long as the network capacity is sufficient to
generalize over the dataset. The use of a multiple-output network, with
parallel dedicated branches for two wall-normal locations, does not provide any
improvement over two separated single-output networks, other than a moderate
saving in training time. Training time can be effectively reduced, by a factor
of 4, via a transfer learning method that initializes the network parameters
using the optimized parameters of a previously-trained network.
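The sketch below, not taken from the paper, illustrates the kind of fully-convolutional mapping described in the abstract: the two wall planes (streamwise and spanwise wall-shear stress) are mapped to a streamwise velocity field at a single wall-normal location, and a second network is initialized from the trained weights of the first, mirroring the transfer-learning idea. Layer counts, channel widths, and the class name `WallToVelocityFCN` are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a fully-convolutional model mapping two wall-shear-stress
# planes (streamwise, spanwise) to the streamwise velocity field at one y+ plane.
# Architecture details are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class WallToVelocityFCN(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, kernel_size=5, padding=2),   # input: (tau_x, tau_z) wall planes
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),    # output: u at the target y+
        )

    def forward(self, wall_planes: torch.Tensor) -> torch.Tensor:
        # wall_planes: (batch, 2, n_x, n_z) -> (batch, 1, n_x, n_z)
        return self.net(wall_planes)

# Transfer learning as described in the abstract: initialize a new network
# (e.g. for a different y+ or Delta t_s^+) from the optimized parameters of a
# previously-trained one instead of training from scratch.
model_y15 = WallToVelocityFCN()
# ... train model_y15 on DNS data at y+ = 15, then:
model_y30 = WallToVelocityFCN()
model_y30.load_state_dict(model_y15.state_dict())  # start from the trained weights
```

In the paper's setting the targets would be DNS velocity fields at $y^+ = 15$, $30$, or $50$; the sketch only fixes the input/output interface and the weight-initialization step.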
Related papers
- Kronecker-Factored Approximate Curvature for Modern Neural Network
Architectures [85.76673783330334]
Two different settings of linear weight-sharing layers motivate two flavours of Kronecker-Factored Approximate Curvature (K-FAC).
We show they are exact for deep linear networks with weight-sharing in their respective setting.
We observe little difference between these two K-FAC variations when using them to train both a graph neural network and a vision transformer.
arXiv Detail & Related papers (2023-11-01T16:37:00Z) - Sampling weights of deep neural networks [1.2370077627846041]
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights and biases of fully-connected neural networks.
In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed.
We prove that sampled networks are universal approximators.
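As a rough illustration of the sampled-network idea summarized above (fixed, sampled hidden parameters and no gradient-based training of internal weights), the snippet below fits only a linear readout by least squares. The standard-normal sampling and `tanh` features are placeholders; the paper proposes its own data-dependent distribution and sampling algorithm.

```python
# Hedged sketch: sample hidden weights/biases once, then fit only the readout.
# The sampling distribution here is a generic placeholder, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_in, n_hidden = 500, 3, 200

X = rng.uniform(-1.0, 1.0, size=(n_samples, n_in))
y = np.sin(X).sum(axis=1)                      # toy regression target

W = rng.standard_normal((n_in, n_hidden))      # sampled, never optimized
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                         # random hidden features

# Only the readout weights are fitted (least squares); no backpropagation.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_pred = H @ beta
```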
arXiv Detail & Related papers (2023-06-29T10:13:36Z) - Predicting the wall-shear stress and wall pressure through convolutional
neural networks [1.95992742032823]
This study aims to assess the capability of convolution-based neural networks to predict wall quantities in a turbulent open channel flow.
The predictions from the FCN are compared with those from a proposed R-Net architecture.
The R-Net is also able to predict the wall-shear-stress and wall-pressure fields using the velocity-fluctuation fields at $y^+ = 50$.
arXiv Detail & Related papers (2023-03-01T18:03:42Z) - Predicting the near-wall region of turbulence through convolutional
neural networks [0.0]
A neural-network-based approach to predict the near-wall behaviour in a turbulent open channel flow is investigated.
The fully-convolutional network (FCN) is trained to predict the two-dimensional velocity-fluctuation fields at $y^+_{\rm target}$.
The FCN can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^+ = 50$.
arXiv Detail & Related papers (2021-07-15T13:58:26Z) - The Rate of Convergence of Variation-Constrained Deep Neural Networks [35.393855471751756]
We show that a class of variation-constrained neural networks can achieve a near-parametric rate $n^{-1/2+\delta}$ for an arbitrarily small constant $\delta$.
The result indicates that the neural function space needed for approximating smooth functions may not be as large as what is often perceived.
arXiv Detail & Related papers (2021-06-22T21:28:00Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - Convolutional-network models to predict wall-bounded turbulence from
wall quantities [0.0]
Two models are trained to predict the two-dimensional velocity-fluctuation fields at different wall-normal locations in a turbulent open channel flow.
The first model is a fully-convolutional neural network (FCN) which directly predicts the fluctuations.
The second one reconstructs the flow fields using a linear combination of orthonormal basis functions.
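A minimal sketch of that second model's output stage, assuming the network supplies coefficients for a set of precomputed orthonormal modes (e.g. POD-like basis functions); the mode count, grid size, and random basis below are purely illustrative.

```python
# Hedged sketch: reconstruct a 2-D flow field as a linear combination of
# orthonormal basis functions. In the paper the coefficients would come from
# the network; here they are random placeholders.
import numpy as np

n_modes, n_x, n_z = 32, 192, 192
rng = np.random.default_rng(0)

# Orthonormal basis: each column of Q is one flattened 2-D mode; columns are orthonormal.
modes = np.linalg.qr(rng.standard_normal((n_x * n_z, n_modes)))[0]   # (n_x*n_z, n_modes)

coeffs = rng.standard_normal(n_modes)          # placeholder for network-predicted coefficients
field = (modes @ coeffs).reshape(n_x, n_z)     # reconstructed velocity-fluctuation field
```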
arXiv Detail & Related papers (2020-06-22T17:57:40Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our algorithm requires far fewer communication rounds in theory.
Experiments on several benchmark datasets demonstrate the effectiveness of our algorithm and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Neural Networks are Convex Regularizers: Exact Polynomial-time Convex
Optimization Formulations for Two-layer Networks [70.15611146583068]
We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs).
Our theory utilizes semi-infinite duality and minimum norm regularization.
arXiv Detail & Related papers (2020-02-24T21:32:41Z)