Neural network based generation of a 1-dimensional stochastic field with turbulent velocity statistics
- URL: http://arxiv.org/abs/2211.11580v3
- Date: Thu, 7 Dec 2023 09:01:24 GMT
- Title: Neural network based generation of a 1-dimensional stochastic field with turbulent velocity statistics
- Authors: Carlos Granero-Belinchon (ODYSSEY, IMT Atlantique - MEE, Lab-STICC_OSE)
- Abstract summary: We study a fully-convolutional neural network model, NN-Turb, which generates a 1-dimensional field with turbulent velocity statistics.
Our model is never in contact with turbulent data and only needs the desired statistical behavior of the structure functions across scales for training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We define and study a fully-convolutional neural network stochastic model,
NN-Turb, which generates a 1-dimensional field with some turbulent velocity
statistics. In particular, the generated process satisfies the Kolmogorov 2/3
law for the second-order structure function. It also presents negative skewness
across scales (i.e., the Kolmogorov 4/5 law) and exhibits intermittency, as
characterized by the skewness and flatness. Furthermore, our model is never in
contact with turbulent data and only needs the desired statistical behavior of
the structure functions across scales for training.
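Since the training signal is purely statistical, it may help to spell out the quantities involved. The following sketch (illustrative numpy only, not the authors' code) estimates the second-order structure function and the skewness and flatness of velocity increments across scales; the synthetic Gaussian signal reproduces the 2/3-law slope but, unlike a turbulent field or NN-Turb output, shows no negative skewness and no anomalous flatness.

```python
import numpy as np

def structure_stats(u, scales):
    """Moments of the increments delta_l u = u(x + l) - u(x) at each scale l:
    the second-order structure function S2(l), plus the skewness (4/5-law
    asymmetry) and flatness (intermittency) of the increments."""
    stats = []
    for l in scales:
        du = u[l:] - u[:-l]
        s2 = np.mean(du ** 2)                 # S2(l) ~ l^(2/3) in K41
        skew = np.mean(du ** 3) / s2 ** 1.5   # negative in real turbulence
        flat = np.mean(du ** 4) / s2 ** 2     # > 3 signals intermittency
        stats.append((l, s2, skew, flat))
    return stats

# Gaussian power-law surrogate with Hurst exponent H = 1/3: it matches the
# 2/3 law for S2 but has zero skewness and flatness close to 3.
rng = np.random.default_rng(0)
n = 2 ** 16
f = np.fft.rfftfreq(n)
amp = np.zeros(f.size)
amp[1:] = f[1:] ** (-(2 * (1 / 3) + 1) / 2)   # spectrum for H = 1/3
u = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(f.size)), n)
for l, s2, sk, fl in structure_stats(u, [4, 16, 64, 256]):
    print(f"l={l:4d}  S2={s2:.3e}  skew={sk:+.3f}  flat={fl:.2f}")
```

Matching all three behaviours at once across scales, without ever seeing turbulence data, is what distinguishes the model.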
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
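As a generic illustration of lattice equivariance (kernel symmetrization over the point group of the square lattice; this is not the LENN construction itself, and the check below needs scipy):

```python
import numpy as np
from scipy.ndimage import convolve

def d4_symmetrize(kernel):
    """Average a square stencil over the 8 symmetries of the square (D4).
    Convolution with the result commutes with lattice rotations and
    reflections, which is one simple route to lattice equivariance."""
    orbit = []
    for k in range(4):
        r = np.rot90(kernel, k)
        orbit += [r, np.fliplr(r)]
    return sum(orbit) / len(orbit)

rng = np.random.default_rng(1)
w = d4_symmetrize(rng.standard_normal((3, 3)))
x = rng.standard_normal((16, 16))

# Equivariance check: rotate-then-convolve equals convolve-then-rotate.
lhs = convolve(np.rot90(x), w, mode="wrap")
rhs = np.rot90(convolve(x, w, mode="wrap"))
print(np.allclose(lhs, rhs))  # True
```

Learned layers can keep this property by parameterizing one representative per weight orbit rather than symmetrizing after the fact.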
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
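A minimal numerical sketch of such a model (frozen random ReLU features with a readout trained by full-batch gradient descent; all sizes are arbitrary): repeatedly reusing the same finite training set lets the training loss keep falling while the test loss stalls.

```python
import numpy as np

rng = np.random.default_rng(2)
d, p, n_train, n_test = 20, 200, 100, 2000

w_star = rng.standard_normal(d) / np.sqrt(d)       # linear teacher
def sample(n):
    X = rng.standard_normal((n, d))
    return X, X @ w_star + 0.1 * rng.standard_normal(n)

Xtr, ytr = sample(n_train)
Xte, yte = sample(n_test)
F = rng.standard_normal((d, p)) / np.sqrt(d)       # frozen random features
Phi_tr, Phi_te = np.maximum(Xtr @ F, 0), np.maximum(Xte @ F, 0)

a = np.zeros(p)                                    # trainable readout only
lr = 0.5 / p
for step in range(5001):
    a -= lr * Phi_tr.T @ (Phi_tr @ a - ytr) / n_train
    if step % 1000 == 0:
        tr = np.mean((Phi_tr @ a - ytr) ** 2)
        te = np.mean((Phi_te @ a - yte) ** 2)
        print(f"step {step:5d}  train {tr:.4f}  test {te:.4f}")
```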
arXiv Detail & Related papers (2024-02-02T01:41:38Z) - A multiscale and multicriteria Generative Adversarial Network to synthesize 1-dimensional turbulent fields [0.0]
This article introduces a new Neural Network model to generate a 1-dimensional field with turbulent velocity statistics.
Both the model architecture and the training procedure are grounded in the Kolmogorov and Obukhov statistical theories of fully developed turbulence.
To train our model we use turbulent velocity signals from grid turbulence at the Modane wind tunnel.
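One plausible reading of the multiscale, multicriteria idea, sketched as a statistical matching term over increment moments at several scales (the function names are ours; in the paper such criteria accompany adversarial training on the Modane signals rather than replacing it):

```python
import numpy as np

def increment_moments(u, scales):
    """Log-energy, skewness and flatness of increments, one row per scale."""
    rows = []
    for l in scales:
        du = u[l:] - u[:-l]
        s2 = np.mean(du ** 2)
        rows.append([np.log(s2),
                     np.mean(du ** 3) / s2 ** 1.5,
                     np.mean(du ** 4) / s2 ** 2])
    return np.array(rows)

def multicriteria_loss(generated, reference, scales):
    """Aggregate mismatch over all scales and all criteria at once."""
    return np.mean((increment_moments(generated, scales)
                    - increment_moments(reference, scales)) ** 2)

rng = np.random.default_rng(7)
print(multicriteria_loss(rng.standard_normal(4096),
                         rng.standard_normal(4096), [4, 16, 64]))
```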
arXiv Detail & Related papers (2023-07-31T11:34:41Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
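The pattern, stripped to its simplest form: a differentiable forward model is fitted once, after which unknown parameters are recovered from measurements by gradient descent. Here an analytic model and a hand-written gradient stand in for the trained surrogate and automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 200)
theta_true = 0.7
y_obs = np.exp(-theta_true * x) + 0.01 * rng.standard_normal(x.size)

theta = 0.2                                   # initial guess
lr = 0.1
for _ in range(500):
    resid = np.exp(-theta * x) - y_obs        # model minus measurement
    grad = np.mean(2.0 * resid * (-x) * np.exp(-theta * x))
    theta -= lr * grad                        # gradient step on the MSE
print(f"recovered theta = {theta:.3f} (true value {theta_true})")
```

Because the fitted model is differentiable and cheap to evaluate, this inner loop can run in real time against streaming data, which is the point made above.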
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - A Solvable Model of Neural Scaling Laws [72.8349503901712]
Large language models with a huge number of parameters, when trained on a near internet-sized number of tokens, have been empirically shown to obey neural scaling laws.
We propose a statistical model -- a joint generative data model and random feature model -- that captures this neural scaling phenomenology.
A key finding is the manner in which the power laws that occur in the statistics of natural datasets are extended by nonlinear random feature maps.
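In practice, exponents of such scaling laws are usually extracted by a log-log fit; a minimal sketch with synthetic numbers (the exponent, prefactor, and irreducible floor below are invented):

```python
import numpy as np

# Hypothetical loss measurements at several model sizes N.
N = np.array([64, 128, 256, 512, 1024, 2048])
loss = 5.0 * N ** -0.31 + 0.02            # power law plus irreducible floor

# Fit loss(N) = a * N^(-alpha) + floor by linear regression in log-log
# coordinates after subtracting the (here assumed known) floor.
floor = 0.02
slope, intercept = np.polyfit(np.log(N), np.log(loss - floor), 1)
print(f"alpha = {-slope:.3f}, a = {np.exp(intercept):.3f}")
```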
arXiv Detail & Related papers (2022-10-30T15:13:18Z) - Simple lessons from complex learning: what a neural network model learns about cosmic structure formation [7.270598539996841]
We train a neural network model to predict the full phase space evolution of cosmological N-body simulations.
Our model achieves percent-level accuracy at nonlinear scales of $k \sim 1\,\mathrm{Mpc}^{-1}\,h$, representing a significant improvement over COLA.
arXiv Detail & Related papers (2022-06-09T15:41:09Z) - On the Dynamics of Inference and Learning [0.0]
We present a treatment of this Bayesian updating process as a continuous dynamical system.
We show that when the Cramér-Rao bound is saturated the learning rate is governed by a simple $1/T$ power law.
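The simplest case in which the Cramér-Rao bound is saturated, estimating a Gaussian mean with known noise, already exhibits this behaviour: the posterior mean is the running average, so each new observation moves the estimate with a gain of exactly $1/T$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, true_mean = 0.0, 3.0
for T in range(1, 10001):
    x = true_mean + rng.standard_normal()
    mu += (x - mu) / T                 # Bayesian update; gain decays as 1/T
    if T in (1, 10, 100, 1000, 10000):
        print(f"T={T:6d}  gain={1 / T:.5f}  mu={mu:.4f}")
```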
arXiv Detail & Related papers (2022-04-19T18:04:36Z) - Emulating Spatio-Temporal Realizations of Three-Dimensional Isotropic Turbulence via Deep Sequence Learning Models [24.025975236316842]
We use a data-driven approach to model a three-dimensional turbulent flow using cutting-edge Deep Learning techniques.
The accuracy of the model is assessed using statistical and physics-based metrics.
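A standard physics-based metric for such assessments is the shell-averaged kinetic energy spectrum; a rough numpy sketch (random fields stand in for the emulated and reference velocity components, and normalization constants are omitted):

```python
import numpy as np

def energy_spectrum(u, v, w):
    """Shell-averaged energy spectrum E(k) of a periodic 3-D velocity field:
    bin the Fourier energy density by integer wavenumber magnitude."""
    n = u.shape[0]
    ek = sum(np.abs(np.fft.fftn(c)) ** 2 for c in (u, v, w)) / 2.0
    k = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    shells = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).round().astype(int)
    E = np.bincount(shells.ravel(), weights=ek.ravel())
    return np.arange(E.size), E

rng = np.random.default_rng(5)
u, v, w = rng.standard_normal((3, 32, 32, 32))
k, E = energy_spectrum(u, v, w)
print(E[:8])        # compare against the reference simulation's spectrum
```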
arXiv Detail & Related papers (2021-12-07T03:33:39Z) - Graph Convolutional Neural Networks for Body Force Prediction [0.0]
A graph based data-driven model is presented to perform inference on fields defined on an unstructured mesh.
The network can infer from field samples at different resolutions, and is invariant to the order in which the measurements within each sample are presented.
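The order invariance follows from how message passing aggregates over neighbours; a minimal sketch of a single graph-convolution layer (generic mean aggregation, not the paper's exact architecture) together with a permutation check:

```python
import numpy as np

def gcn_layer(A, X, W):
    """Mean-aggregate neighbour features, then apply a shared linear map
    and a ReLU. Only the adjacency structure enters, so relabelling nodes
    simply permutes the output rows."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.maximum((A @ X) / deg @ W, 0)

rng = np.random.default_rng(6)
n, f = 6, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                      # undirected, mesh-like graph
X = rng.standard_normal((n, f))
W = rng.standard_normal((f, f))

P = np.eye(n)[rng.permutation(n)]           # node relabelling
same = np.allclose(P @ gcn_layer(A, X, W),
                   gcn_layer(P @ A @ P.T, P @ X, W))
print(same)                                 # True: ordering does not matter
```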
arXiv Detail & Related papers (2020-12-03T19:53:47Z) - Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation function.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
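A rough sketch of the linear-approximation idea: count how many straight pieces are needed to track a curve activation within a tolerance. The piece count here is a crude stand-in for the paper's complexity measure, not the LANN definition itself:

```python
import numpy as np

def count_linear_pieces(f, lo, hi, tol):
    """Greedily split [lo, hi] until a straight chord matches f within tol
    on every piece, then report the number of pieces."""
    xs = np.linspace(lo, hi, 513)
    stack, pieces = [(lo, hi)], 0
    while stack:
        a, b = stack.pop()
        t = xs[(xs >= a) & (xs <= b)]
        if t.size < 3:                       # resolution limit of the grid
            pieces += 1
            continue
        chord = f(a) + (f(b) - f(a)) * (t - a) / (b - a)
        if np.max(np.abs(f(t) - chord)) <= tol:
            pieces += 1
        else:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]
    return pieces

for tol in (0.1, 0.01, 0.001):
    print(f"tol={tol}: {count_linear_pieces(np.tanh, -4.0, 4.0, tol)} pieces")
```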
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.