Toward Discretization-Consistent Closure Schemes for Large Eddy
Simulation Using Reinforcement Learning
- URL: http://arxiv.org/abs/2309.06260v2
- Date: Wed, 13 Dec 2023 11:05:25 GMT
- Title: Toward Discretization-Consistent Closure Schemes for Large Eddy
Simulation Using Reinforcement Learning
- Authors: Andrea Beck and Marius Kurz
- Abstract summary: This study proposes a novel method for developing discretization-consistent closure schemes for Large Eddy Simulation (LES).
The task of adapting the coefficients of LES closure models is framed as a Markov decision process and solved in an a posteriori manner with Reinforcement Learning (RL).
All newly derived models achieve accurate results that either match or outperform traditional models for different discretizations and resolutions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a novel method for developing discretization-consistent
closure schemes for implicitly filtered Large Eddy Simulation (LES). Here, the
induced filter kernel, and thus the closure terms, are determined by the
properties of the grid and the discretization operator, leading to additional
computational subgrid terms that are generally unknown in a priori analysis. In
this work, the task of adapting the coefficients of LES closure models is thus
framed as a Markov decision process and solved in an a posteriori manner with
Reinforcement Learning (RL). This optimization framework is applied to both
explicit and implicit closure models. The explicit model is based on an
element-local eddy viscosity model. The optimized model is found to adapt its
induced viscosity within discontinuous Galerkin (DG) methods to homogenize the
dissipation within an element by adding more viscosity near its center. For the
implicit modeling, RL is applied to identify an optimal blending strategy for a
hybrid DG and Finite Volume (FV) scheme. The resulting optimized discretization
yields more accurate results in LES than either the pure DG or FV method and
renders itself as a viable modeling ansatz that could initiate a novel class of
high-order schemes for compressible turbulence by combining turbulence modeling
with shock capturing in a single framework. All newly derived models achieve
accurate results that either match or outperform traditional models for
different discretizations and resolutions. Overall, the results demonstrate
that the proposed RL optimization can provide discretization-consistent
closures that could reduce the uncertainty in implicitly filtered LES.
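The abstract frames the closure problem as a Markov decision process: at each coarse time interval an agent observes element-local flow features, sets a closure coefficient per element (an eddy-viscosity constant for the explicit model, or a DG/FV blending factor for the implicit one), the LES advances, and a reward scores the agreement of the resulting statistics with a reference. The Python sketch below illustrates only that interaction loop; the ToyLESEnv surrogate, the feature and reward definitions, and the simple parametric policy are illustrative assumptions, not the authors' solver or training setup.
```python
# Minimal sketch of the MDP framing described in the abstract: an agent sets
# element-local closure coefficients, the LES advances one coarse interval,
# and the reward scores agreement with reference statistics.
# ToyLESEnv, the feature/reward definitions, and the random-search "training"
# are illustrative assumptions, not the paper's actual solver or RL setup.
import numpy as np

N_ELEMS = 64                          # number of DG elements (assumed)
ACTION_LOW, ACTION_HIGH = 0.0, 0.5    # admissible coefficient range (assumed)

class ToyLESEnv:
    """Stand-in for an implicitly filtered LES solver (e.g. a DG code)."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.normal(size=(N_ELEMS, 3))  # element-local features

    def reset(self):
        self.state = self.rng.normal(size=(N_ELEMS, 3))
        return self.state

    def step(self, coeffs):
        # Advance the "flow" for one coarse RL interval; here just a toy update.
        self.state = 0.9 * self.state + 0.1 * self.rng.normal(size=self.state.shape)
        # Reward: negative mismatch between a surrogate statistic and a reference
        # value; a real setup would compare e.g. energy spectra against DNS data.
        stat = np.mean(self.state ** 2) * (1.0 + np.mean(coeffs))
        return self.state, -abs(stat - 1.0)

def policy(state, weights):
    """Element-local policy: map features to one closure coefficient per element."""
    raw = state @ weights                                        # shape (N_ELEMS,)
    return ACTION_LOW + (ACTION_HIGH - ACTION_LOW) / (1.0 + np.exp(-raw))

# A posteriori optimization loop (placeholder random search standing in for the
# policy-gradient style RL update the paper's framework would use).
rng = np.random.default_rng(1)
env = ToyLESEnv()
best_w, best_ret = None, -np.inf
for trial in range(50):
    w = rng.normal(size=3)
    state, ret = env.reset(), 0.0
    for t in range(20):                       # episode of 20 coarse intervals
        coeffs = policy(state, w)             # e.g. element-local eddy viscosity
        state, reward = env.step(coeffs)
        ret += reward
    if ret > best_ret:
        best_w, best_ret = w, ret
print("best return:", best_ret, "weights:", best_w)
```
In the implicit-model variant described above, the same loop would return a per-element blending factor between the DG and FV operators instead of an eddy-viscosity coefficient; the a posteriori character of the optimization comes from the reward being evaluated on the actual LES solution rather than on pre-filtered reference data.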
Related papers
- Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control [26.195547996552406]
We cast reward fine-tuning as stochastic optimal control (SOC) for dynamical generative models that produce samples through an iterative process.
We find that our approach significantly improves over existing methods for reward fine-tuning, achieving better consistency, realism, and generalization to unseen human preference reward models.
arXiv Detail & Related papers (2024-09-13T14:22:14Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Multi-Response Heteroscedastic Gaussian Process Models and Their Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z)
- On stable wrapper-based parameter selection method for efficient ANN-based data-driven modeling of turbulent flows [2.0731505001992323]
This study aims to analyze and develop a reduced modeling approach based on artificial neural network (ANN) and wrapper methods.
It is found that the gradient-based subset selection to minimize the total derivative loss results in improved consistency-over-trials.
For the reduced turbulent Prandtl number model, the gradient-based subset selection improves the prediction in the validation case over the other methods.
arXiv Detail & Related papers (2023-08-04T08:26:56Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Deep Reinforcement Learning for Turbulence Modeling in Large Eddy Simulations [0.0]
In this work, we apply a reinforcement learning framework to find an optimal eddy-viscosity for implicitly filtered large eddy simulations.
We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy.
arXiv Detail & Related papers (2022-06-21T07:25:43Z)
- A machine learning framework for LES closure terms [0.0]
We derive a consistent framework for LES closure models, with special emphasis laid upon the incorporation of implicit discretization-based filters and numerical approximation errors.
We compute the exact closure terms for the different LES filter functions from direct numerical simulation results of decaying homogeneous isotropic turbulence.
For the given application, the GRU architecture clearly outperforms the other network architectures in terms of accuracy.
arXiv Detail & Related papers (2020-10-01T08:42:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.