Deep Reinforcement Learning for Turbulence Modeling in Large Eddy
Simulations
- URL: http://arxiv.org/abs/2206.11038v1
- Date: Tue, 21 Jun 2022 07:25:43 GMT
- Title: Deep Reinforcement Learning for Turbulence Modeling in Large Eddy
Simulations
- Authors: Marius Kurz, Philipp Offenhäuser, Andrea Beck
- Abstract summary: In this work, we apply a reinforcement learning framework to find an optimal eddy-viscosity for implicitly filtered large eddy simulations.
We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Over the last years, supervised learning (SL) has established itself as the
state-of-the-art for data-driven turbulence modeling. In the SL paradigm,
models are trained based on a dataset, which is typically computed a priori
from a high-fidelity solution by applying the respective filter function, which
separates the resolved and the unresolved flow scales. For implicitly filtered
large eddy simulation (LES), this approach is infeasible, since here, the
employed discretization itself acts as an implicit filter function. As a
consequence, the exact filter form is generally not known and thus, the
corresponding closure terms cannot be computed even if the full solution is
available. The reinforcement learning (RL) paradigm can be used to avoid this
inconsistency by training not on a previously obtained training dataset, but
instead by interacting directly with the dynamical LES environment itself. This
allows the potentially complex implicit LES filter to be incorporated into the
training process by design. In this work, we apply a reinforcement learning
framework to find an optimal eddy-viscosity for implicitly filtered large eddy
simulations of forced homogeneous isotropic turbulence. For this, we formulate
the task of turbulence modeling as an RL task with a policy network based on
convolutional neural networks that adapts the eddy-viscosity in LES dynamically
in space and time based on the local flow state only. We demonstrate that the
trained models can provide long-term stable simulations and that they
outperform established analytical models in terms of accuracy. In addition, the
models generalize well to other resolutions and discretizations. We thus
demonstrate that RL can provide a framework for consistent, accurate and stable
turbulence modeling especially for implicitly filtered LES.
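The RL setup described in the abstract can be summarized as a loop in which the LES solver acts as the environment and the policy maps the local flow state to a spatially and temporally varying eddy-viscosity field. The following is a minimal, purely illustrative sketch of that loop under stated assumptions: the class and function names, the toy diffusion dynamics, and the reward are invented for illustration, and the paper's convolutional policy network is replaced here by a simple local gradient map.

```python
import numpy as np

# Illustrative sketch of the RL framing from the abstract (all names are
# hypothetical): the environment advances an LES-like field by one step,
# the action is a field of eddy-viscosities computed from the local state.

rng = np.random.default_rng(0)

class ToyLESEnvironment:
    """Stand-in for the LES solver: a velocity field on a periodic grid."""
    def __init__(self, n=8):
        self.u = rng.standard_normal((n, n, n))

    def step(self, nu_t):
        # Crude stand-in for one LES step: diffuse the field with the
        # locally prescribed eddy-viscosity (forward Euler, periodic BCs).
        lap = sum(np.roll(self.u, s, axis=a)
                  for a in range(3) for s in (-1, 1)) - 6.0 * self.u
        self.u = self.u + nu_t * 0.1 * lap
        # Toy reward: penalize deviation of a simple energy proxy from a
        # target value (the paper compares against spectra of DNS data).
        return self.u, -abs(float(np.mean(self.u ** 2)) - 1.0)

def policy(u, c=0.1):
    # Minimal local policy: eddy-viscosity proportional to a normalized
    # local gradient-magnitude proxy. In the paper this map is a CNN that
    # sees only the local flow state.
    grad = np.abs(np.roll(u, -1, axis=0) - u)
    return c * grad / (grad.max() + 1e-12)

env = ToyLESEnvironment()
for _ in range(5):
    nu_t = policy(env.u)            # action: eddy-viscosity field
    state, reward = env.step(nu_t)  # environment: one LES step + reward
assert np.isfinite(state).all() and nu_t.shape == state.shape
```

Training the policy (e.g., with PPO, as is common for such CNN policies) would then adjust the map from local state to eddy-viscosity so as to maximize the accumulated reward over long rollouts.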
Related papers
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- A Priori Uncertainty Quantification of Reacting Turbulence Closure Models using Bayesian Neural Networks [0.0]
We employ Bayesian neural networks to capture uncertainties in a reacting flow model.
We demonstrate that BNN models can provide unique insights about the structure of uncertainty of the data-driven closure models.
The efficacy of the model is demonstrated by a priori evaluation on a dataset consisting of a variety of flame conditions and fuels.
arXiv Detail & Related papers (2024-02-28T22:19:55Z)
- Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy, even with a zero exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Toward Discretization-Consistent Closure Schemes for Large Eddy Simulation Using Reinforcement Learning [0.0]
This study proposes a novel method for developing discretization-consistent closure schemes for Large Eddy Simulation (LES).
The task of adapting the coefficients of LES closure models is framed as a Markov decision process and solved in an a posteriori manner with Reinforcement Learning (RL).
All newly derived models achieve accurate results that either match or outperform traditional models for different discretizations and resolutions.
arXiv Detail & Related papers (2023-09-12T14:20:12Z)
- Accurate deep learning sub-grid scale models for large eddy simulations [0.0]
We present two families of sub-grid scale (SGS) turbulence models developed for large-eddy simulation (LES) purposes.
Their development required the formulation of physics-informed robust and efficient Deep Learning (DL) algorithms.
Explicit filtering of data from direct simulations of canonical channel flow at two friction Reynolds numbers provided accurate data for training and testing.
arXiv Detail & Related papers (2023-07-19T15:30:06Z)
- Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective [142.36200080384145]
We propose a single objective that jointly optimizes a latent-space model and policy to achieve high returns while remaining self-consistent.
We demonstrate that the resulting algorithm matches or improves the sample-efficiency of the best prior model-based and model-free RL methods.
arXiv Detail & Related papers (2022-09-18T03:51:58Z)
- A data-driven peridynamic continuum model for upscaling molecular dynamics [3.1196544696082613]
We propose a learning framework to extract, from molecular dynamics data, an optimal Linear Peridynamic Solid model.
We provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions.
This framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training.
arXiv Detail & Related papers (2021-08-04T07:07:47Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data due to inaccurate model estimation for better policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.