Deep Reinforcement Learning for Turbulence Modeling in Large Eddy
Simulations
- URL: http://arxiv.org/abs/2206.11038v1
- Date: Tue, 21 Jun 2022 07:25:43 GMT
- Title: Deep Reinforcement Learning for Turbulence Modeling in Large Eddy
Simulations
- Authors: Marius Kurz, Philipp Offenhäuser, Andrea Beck
- Abstract summary: In this work, we apply a reinforcement learning framework to find an optimal eddy-viscosity for implicitly filtered large eddy simulations.
We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, supervised learning (SL) has established itself as the
state-of-the-art for data-driven turbulence modeling. In the SL paradigm,
models are trained based on a dataset, which is typically computed a priori
from a high-fidelity solution by applying the respective filter function, which
separates the resolved and the unresolved flow scales. For implicitly filtered
large eddy simulation (LES), this approach is infeasible, since here, the
employed discretization itself acts as an implicit filter function. As a
consequence, the exact filter form is generally not known and thus, the
corresponding closure terms cannot be computed even if the full solution is
available. The reinforcement learning (RL) paradigm can be used to avoid this
inconsistency by training not on a previously obtained training dataset, but
instead by interacting directly with the dynamical LES environment itself. This
makes it possible to incorporate the potentially complex implicit LES filter
into the training process by design. In this work, we apply a reinforcement learning
framework to find an optimal eddy-viscosity for implicitly filtered large eddy
simulations of forced homogeneous isotropic turbulence. For this, we formulate
the task of turbulence modeling as an RL task with a policy network based on
convolutional neural networks that adapts the eddy-viscosity in LES dynamically
in space and time based on the local flow state only. We demonstrate that the
trained models can provide long-term stable simulations and that they
outperform established analytical models in terms of accuracy. In addition, the
models generalize well to other resolutions and discretizations. We thus
demonstrate that RL can provide a framework for consistent, accurate and stable
turbulence modeling especially for implicitly filtered LES.
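The established analytical models such RL-trained policies are compared against typically include the classical Smagorinsky model, which computes the eddy-viscosity algebraically from the resolved strain rate. A minimal sketch (the constant and filter width below are illustrative values, not the paper's):

```python
import numpy as np

def smagorinsky_eddy_viscosity(grad_u, delta, c_s=0.17):
    """Classical Smagorinsky eddy-viscosity: nu_t = (C_s * delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) and S the resolved strain-rate tensor.

    grad_u : (3, 3) array, resolved velocity gradient du_i/dx_j
    delta  : filter width (here: an illustrative grid scale)
    c_s    : Smagorinsky constant (typical literature values ~0.1-0.2)
    """
    strain = 0.5 * (grad_u + grad_u.T)   # S_ij = (du_i/dx_j + du_j/dx_i) / 2
    magnitude = np.sqrt(2.0 * np.sum(strain * strain))
    return (c_s * delta) ** 2 * magnitude

# Pure shear flow du/dy = 1: |S| = 1, so nu_t = (C_s * delta)^2
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 1.0
nu_t = smagorinsky_eddy_viscosity(grad_u, delta=0.1)
```

The RL approach in the paper replaces such a static algebraic relation with a CNN policy that adapts the eddy-viscosity locally in space and time from the local flow state.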
Related papers
- AutoTurb: Using Large Language Models for Automatic Algebraic Model Discovery of Turbulence Closure [15.905369652489505]
In this work, a novel framework using LLMs to automatically discover expressions for correcting the Reynolds stress model is proposed.
The proposed method is performed for separated flow over periodic hills at Re = 10,595.
It is demonstrated that the corrective RANS can improve the prediction for both the Reynolds stress and mean velocity fields.
arXiv Detail & Related papers (2024-10-14T16:06:35Z)
- Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability [59.758009422067]
We propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models.
Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan.
Experiments show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
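The closed-form Gaussian inference such a layer performs is the standard Kalman recursion. A minimal scalar sketch (written sequentially here; the paper's parallel-scan formulation is not reproduced, and the noise parameters are illustrative):

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update step of a scalar Kalman filter with identity
    state-transition and observation models (F = H = 1).

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (illustrative values)
    """
    # Predict: the state is modeled as constant, only uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Filtering repeated measurements of a constant signal z = 1.0:
# the estimate converges toward the measurement and the variance shrinks.
x, p = 0.0, 1.0
for _ in range(10):
    x, p = kalman_step(x, p, z=1.0)
```

Because every step is a closed-form Gaussian operation, the recursion is differentiable end-to-end, which is what lets it serve as a layer inside a deep RL model.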
arXiv Detail & Related papers (2024-09-25T11:22:29Z)
- A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations [2.7755345520127936]
We propose a domain-decomposition-based deep learning (DL) framework, named CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs).
The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers.
arXiv Detail & Related papers (2024-08-26T17:50:47Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- A Priori Uncertainty Quantification of Reacting Turbulence Closure Models using Bayesian Neural Networks [0.0]
We employ Bayesian neural networks to capture uncertainties in a reacting flow model.
We demonstrate that BNN models can provide unique insights about the structure of uncertainty of the data-driven closure models.
The efficacy of the model is demonstrated by a priori evaluation on a dataset consisting of a variety of flame conditions and fuels.
arXiv Detail & Related papers (2024-02-28T22:19:55Z)
- Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy, even with a zero-exemplar buffer and a model only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z)
- Toward Discretization-Consistent Closure Schemes for Large Eddy Simulation Using Reinforcement Learning [0.0]
This study proposes a novel method for developing discretization-consistent closure schemes for Large Eddy Simulation (LES).
The task of adapting the coefficients of LES closure models is framed as a Markov decision process and solved in an a posteriori manner with Reinforcement Learning (RL).
All newly derived models achieve accurate results that either match or outperform traditional models for different discretizations and resolutions.
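Framing closure-coefficient adaptation as an MDP can be illustrated with a deliberately stripped-down sketch: a Gaussian policy over a single scalar coefficient, updated with vanilla REINFORCE against a toy reward. The real methods use a CNN policy interacting with an LES solver; everything below (the target value, noise scale, learning rate) is a hypothetical stand-in:

```python
import random

random.seed(0)

# Toy stand-in for "run a simulation with coefficient c and score it":
# the reward is highest when c matches an optimum unknown to the agent.
C_TARGET = 0.17

def reward(c):
    return -(c - C_TARGET) ** 2

# Gaussian policy N(mean, sigma^2) over the coefficient; learn the mean.
mean, sigma, lr = 0.0, 0.1, 0.01
baseline = 0.0  # running reward baseline for variance reduction

for step in range(5000):
    c = random.gauss(mean, sigma)       # sample an action (a coefficient)
    r = reward(c)                       # "a posteriori" evaluation
    baseline += 0.05 * (r - baseline)
    # REINFORCE: grad of log pi w.r.t. mean is (c - mean) / sigma^2
    mean += lr * (r - baseline) * (c - mean) / sigma**2
```

The policy mean drifts toward the rewarded coefficient value without ever needing the exact closure terms, which is the property that makes the approach consistent with an implicit filter.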
arXiv Detail & Related papers (2023-09-12T14:20:12Z)
- Accurate deep learning sub-grid scale models for large eddy simulations [0.0]
We present two families of sub-grid scale (SGS) turbulence models developed for large-eddy simulation (LES) purposes.
Their development required the formulation of physics-informed robust and efficient Deep Learning (DL) algorithms.
Explicit filtering of data from direct simulations of canonical channel flow at two friction Reynolds numbers provided accurate data for training and testing.
arXiv Detail & Related papers (2023-07-19T15:30:06Z)
- Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective [142.36200080384145]
We propose a single objective which jointly optimizes a latent-space model and policy to achieve high returns while remaining self-consistent.
We demonstrate that the resulting algorithm matches or improves the sample-efficiency of the best prior model-based and model-free RL methods.
arXiv Detail & Related papers (2022-09-18T03:51:58Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.