A machine learning framework for LES closure terms
- URL: http://arxiv.org/abs/2010.03030v1
- Date: Thu, 1 Oct 2020 08:42:37 GMT
- Title: A machine learning framework for LES closure terms
- Authors: Marius Kurz and Andrea Beck
- Abstract summary: We derive a consistent framework for LES closure models, with special emphasis laid upon the incorporation of implicit discretization-based filters and numerical approximation errors.
We compute the exact closure terms for the different LES filter functions from direct numerical simulation results of decaying homogeneous isotropic turbulence.
For the given application, the GRU architecture clearly outperforms the MLP networks in terms of accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the present work, we explore the capability of artificial neural networks
(ANN) to predict the closure terms for large eddy simulations (LES) solely from
coarse-scale data. To this end, we derive a consistent framework for LES
closure models, with special emphasis laid upon the incorporation of implicit
discretization-based filters and numerical approximation errors. We investigate
implicit filter types, which are inspired by the solution representation of
discontinuous Galerkin and finite volume schemes and mimic the behaviour of the
discretization operator, and a global Fourier cutoff filter as a representative
of a typical explicit LES filter. Within the perfect LES framework, we compute
the exact closure terms for the different LES filter functions from direct
numerical simulation results of decaying homogeneous isotropic turbulence.
Multiple ANNs with a multilayer perceptron (MLP) or a gated recurrent unit (GRU)
architecture are trained to predict the computed closure terms solely from
coarse-scale input data. For the given application, the GRU architecture
clearly outperforms the MLP networks in terms of accuracy, whilst reaching up
to 99.9% cross-correlation between the networks' predictions and the exact
closure terms for all considered filter functions. The GRU networks are also
shown to generalize well across different LES filters and resolutions. The
present study can thus be seen as a starting point for the investigation of
data-based modeling approaches for LES, which not only include the physical
closure terms, but account for the discretization effects in implicitly
filtered LES as well.
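As a rough, purely illustrative sketch of the kind of model the abstract describes (not the authors' actual architecture, weights, or data), the snippet below implements a single scalar GRU cell in plain Python together with the cross-correlation metric used to report accuracy; all dimensions, weight values, and input samples are invented for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One scalar GRU step: coarse-scale input x, hidden state h.
    w is a dict of scalar weights (invented here for illustration)."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])          # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])          # reset gate
    n = math.tanh(w["wn"] * x + w["un"] * (r * h) + w["bn"])  # candidate state
    return (1.0 - z) * n + z * h                              # new hidden state

def cross_correlation(a, b):
    """Pearson cross-correlation, the accuracy measure quoted in the abstract."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Run the (untrained) GRU over a toy sequence of coarse-scale samples.
random.seed(0)
weights = {k: random.uniform(-0.5, 0.5)
           for k in ("wz", "uz", "bz", "wr", "ur", "br", "wn", "un", "bn")}
h = 0.0
predictions = []
for x in [0.1, -0.3, 0.5, 0.2, -0.1]:
    h = gru_cell(x, h, weights)
    predictions.append(h)
```

In practice the inputs would be vectors of coarse-scale flow quantities and the weights would be matrices learned from the DNS-derived closure terms; the recurrence over the sequence is what lets the GRU exploit temporal context that an MLP cannot.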
Related papers
- Normalising Flow-based Differentiable Particle Filters [19.09640071505051]
We present a differentiable particle filtering framework that uses (conditional) normalising flows to build its dynamic model, proposal distribution, and measurement model.
We derive the theoretical properties of the proposed filters and evaluate the proposed normalising flow-based differentiable particle filters' performance through a series of numerical experiments.
arXiv Detail & Related papers (2024-03-03T12:23:17Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Toward Discretization-Consistent Closure Schemes for Large Eddy Simulation Using Reinforcement Learning [0.0]
This study proposes a novel method for developing discretization-consistent closure schemes for Large Eddy Simulation (LES).
The task of adapting the coefficients of LES closure models is framed as a Markov decision process and solved in an a posteriori manner with Reinforcement Learning (RL).
All newly derived models achieve accurate results that either match or outperform traditional models for different discretizations and resolutions.
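As a toy illustration of that MDP framing (none of these quantities come from the paper), the sketch below treats the closure-model coefficient as the action and a hypothetical a-posteriori accuracy score, obtained only after "running" the simulation with that coefficient, as the reward; a simple epsilon-greedy bandit then improves the coefficient from rewards alone. The reward function and its peak at 0.17 (roughly the classical Smagorinsky constant) are invented stand-ins for an actual LES run.

```python
import random

def a_posteriori_reward(c):
    """Hypothetical surrogate for the accuracy of an LES run with coefficient c.
    A real setup would run the simulation and score it against reference data."""
    return -(c - 0.17) ** 2

# Discrete action space: candidate closure-model coefficients.
actions = [i / 100 for i in range(0, 51)]  # 0.00 .. 0.50
values = {a: 0.0 for a in actions}         # running mean reward per action
counts = {a: 0 for a in actions}

random.seed(1)
for step in range(2000):
    # Epsilon-greedy selection over coefficients.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda c: values[c])
    r = a_posteriori_reward(a)                 # reward only after the "run"
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # incremental mean update

best_c = max(actions, key=lambda c: values[c])
```

The full method in the paper is a proper sequential RL problem (state-dependent policy, trajectories of simulation states); this stateless bandit only conveys the "learn the coefficient from a-posteriori feedback" idea.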
arXiv Detail & Related papers (2023-09-12T14:20:12Z) - Learning Closed-form Equations for Subgrid-scale Closures from High-fidelity Data: Promises and Challenges [1.2582887633807602]
We learn closures from filtered numerical simulations of 2D turbulence and Rayleigh-Bénard convection.
We show that the discovered closures are consistent with the leading term of the Taylor series expansion.
These findings are relevant to closure modeling of any multi-scale system.
arXiv Detail & Related papers (2023-06-08T08:07:54Z) - Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z) - Deep Reinforcement Learning for Turbulence Modeling in Large Eddy Simulations [0.0]
In this work, we apply a reinforcement learning framework to find an optimal eddy-viscosity for implicitly filtered large eddy simulations.
We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy.
arXiv Detail & Related papers (2022-06-21T07:25:43Z) - Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - Machine learning-based conditional mean filter: a generalization of the ensemble Kalman filter for nonlinear data assimilation [42.60602838972598]
We propose a machine learning-based ensemble conditional mean filter (ML-EnCMF) for tracking possibly high-dimensional non-Gaussian state models with nonlinear dynamics based on sparse observations.
The proposed filtering method is developed based on the conditional expectation and numerically implemented using machine learning (ML) techniques combined with the ensemble method.
arXiv Detail & Related papers (2021-06-15T06:40:32Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.