Dynamic Deep Learning LES Closures: Online Optimization With Embedded DNS
- URL: http://arxiv.org/abs/2303.02338v1
- Date: Sat, 4 Mar 2023 06:20:47 GMT
- Title: Dynamic Deep Learning LES Closures: Online Optimization With Embedded DNS
- Authors: Justin Sirignano and Jonathan F. MacArt
- Abstract summary: We develop a new online training method for deep learning closure models in large-eddy simulation (LES).
The deep learning closure model is dynamically trained during the LES calculation using embedded direct numerical simulation (DNS) data.
An online optimization algorithm is developed to dynamically train the deep learning closure model in the coupled, LES-embedded DNS calculation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) has recently emerged as a candidate for closure modeling
of large-eddy simulation (LES) of turbulent flows. High-fidelity training data
is typically limited: it is computationally costly (or even impossible) to
numerically generate at high Reynolds numbers, while experimental data is also
expensive to produce and might only include sparse/aggregate flow measurements.
Thus, only a relatively small number of geometries and physical regimes will
realistically be included in any training dataset. Limited data can lead to
overfitting and therefore inaccurate predictions for geometries and physical
regimes that are different from the training cases. We develop a new online
training method for deep learning closure models in LES that seeks to address
this challenge. The deep learning closure model is dynamically trained during
the LES calculation using embedded direct numerical
simulation (DNS) data. That is, in a small subset of the domain, the flow is
computed at DNS resolutions in concert with the LES prediction. The closure
model then adjusts its approximation to the unclosed terms using data from the
embedded DNS. Consequently, the closure model is trained on data from the exact
geometry/physical regime of the prediction at hand. An online optimization
algorithm is developed to dynamically train the deep learning closure model in
the coupled, LES-embedded DNS calculation.
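As a rough illustration of the coupled calculation, the sketch below advances the LES and the embedded DNS together and updates a generic closure network online from the filtered-DNS data. This is a minimal sketch under stated assumptions, not the authors' implementation: `les_step`, `dns_step`, `closure_target`, and `les_features` are hypothetical solver hooks supplied by the caller, and the MLP architecture, mean-squared-error loss, and Adam optimizer are illustrative choices.

```python
import torch


class ClosureNet(torch.nn.Module):
    """Generic MLP mapping local resolved-flow features to the unclosed (subgrid) terms."""

    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_in, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, n_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def online_train(model, les_state, dns_state, les_step, dns_step,
                 closure_target, les_features, n_steps: int, lr: float = 1e-4):
    """Couple the LES, the embedded DNS, and the closure-model update in one loop.

    les_step, dns_step, closure_target, and les_features are user-supplied
    solver hooks (placeholders here, not the paper's interfaces):
      - les_step(state, closure): advance the LES one step, closed by the NN
      - dns_step(state): advance the embedded DNS subdomain one step
      - closure_target(dns_state): filter the DNS onto the LES grid and compute
        the "exact" unclosed terms in the subdomain
      - les_features(les_state): closure-model inputs at the same points
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_steps):
        les_state = les_step(les_state, closure=model)   # LES over the full domain
        dns_state = dns_step(dns_state)                  # DNS in the small subdomain
        target = closure_target(dns_state)               # filtered-DNS closure data
        pred = model(les_features(les_state))            # NN closure prediction
        loss = torch.nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

Because the target is recomputed from the embedded DNS at every step, the network is always fit to the exact geometry and physical regime of the ongoing prediction rather than to a fixed offline dataset.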
Related papers
- Graph Neural Networks and Differential Equations: A hybrid approach for data assimilation of fluid flows
This study presents a novel hybrid approach that combines Graph Neural Networks (GNNs) with the Reynolds-Averaged Navier-Stokes (RANS) equations.
The results demonstrate significant improvements in the accuracy of the reconstructed mean flow compared to purely data-driven models.
arXiv: 2024-11-14
- Computation-Aware Gaussian Processes: Model Selection and Linear-Time Inference
We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU.
As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
arXiv: 2024-11-01
- Data-Augmented Predictive Deep Neural Network: Enhancing the extrapolation capabilities of non-intrusive surrogate models
We propose a new deep learning framework in which kernel dynamic mode decomposition (KDMD) is employed to evolve the dynamics of the latent space generated by the encoder part of a convolutional autoencoder (CAE).
After adding the KDMD-decoder-extrapolated data to the original data set, we train the CAE along with a feed-forward deep neural network using the augmented data.
The trained network can predict future states outside the training time interval at out-of-training parameter samples.
arXiv: 2024-10-17
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation
We present a framework for training generative models for density estimation.
We use a score-based diffusion model to generate labeled data.
Once the labeled data are generated, a simple fully connected neural network can be trained to learn the generative model in a supervised manner (see the sketch after this list).
arXiv: 2023-10-22
- Training Deep Surrogate Models with Large Scale Online Learning
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions to PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk, and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
arXiv: 2023-06-28
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv: 2022-04-18
- Reconstructing High-resolution Turbulent Flows Using Physics-Guided Neural Networks
Direct numerical simulation (DNS) of turbulent flows is computationally expensive and cannot be applied to flows with large Reynolds numbers.
Large eddy simulation (LES) is an alternative that is computationally less demanding, but is unable to capture all of the scales of turbulent transport accurately.
We build a new data-driven methodology based on super-resolution techniques to reconstruct DNS data from LES predictions.
arXiv: 2021-09-06
- AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
We present a general approach called Alternating Compressed/DeCompressed (AC/DC) training of deep neural networks (DNNs); a toy sketch of the alternation appears after this list.
AC/DC outperforms existing sparse training methods in accuracy at similar computational budgets.
An important property of AC/DC is that it allows co-training of dense and sparse models, yielding accurate sparse-dense model pairs at the end of the training process.
arXiv: 2021-06-23
- Large-scale Neural Solvers for Partial Differential Equations
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations: physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
arXiv: 2020-09-08
- Real-Time Regression with Dividing Local Gaussian Processes
Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression.
Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice.
A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
arXiv: 2020-06-16
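For the diffusion-model-assisted density-estimation entry above, here is a minimal sketch of the generate-then-supervise idea. It assumes a pretrained score-based sampler is available as a callable (`diffusion_sample`, hypothetical here); the network size, loss, and optimizer are illustrative choices, not the paper's.

```python
import torch


def make_labeled_data(diffusion_sample, n: int, d: int):
    """Pair Gaussian noise with the samples a pretrained score-based diffusion
    sampler produces from it. `diffusion_sample` is a hypothetical
    user-supplied callable, e.g. a probability-flow-ODE integrator."""
    z = torch.randn(n, d)              # inputs drawn from the reference Gaussian
    with torch.no_grad():
        x = diffusion_sample(z)        # labels: the generated data samples
    return z, x


def fit_generator(z, x, hidden: int = 128, epochs: int = 500, lr: float = 1e-3):
    """Supervised fit of a simple fully connected network mapping z -> x."""
    net = torch.nn.Sequential(
        torch.nn.Linear(z.shape[1], hidden), torch.nn.ReLU(),
        torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
        torch.nn.Linear(hidden, x.shape[1]),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        loss = torch.nn.functional.mse_loss(net(z), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net   # new samples: net(torch.randn(m, z.shape[1]))
```

Once fit, the small network replaces the expensive iterative sampler for one-shot generation.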
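For the AC/DC entry above, a toy sketch of alternating dense and compressed training phases using magnitude masks. The phase schedule, sparsity level, and per-step re-masking are simplifications for illustration, not the paper's exact procedure.

```python
import torch


def acdc_train(model, loss_fn, data_loader, sparsity: float = 0.9,
               steps_per_phase: int = 100, n_phases: int = 6, lr: float = 1e-2):
    """Toy AC/DC-style loop: even phases train dense, odd phases train sparse
    by masking the smallest-magnitude weights and projecting back after each step."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    masks = None
    data_iter = iter(data_loader)
    for phase in range(n_phases):
        compressed = phase % 2 == 1            # even phases dense, odd compressed
        if compressed:
            masks = []
            for p in model.parameters():
                k = max(1, int(p.numel() * sparsity))
                thresh = p.detach().abs().flatten().kthvalue(k).values
                masks.append((p.detach().abs() > thresh).float())
        for _ in range(steps_per_phase):
            try:
                xb, yb = next(data_iter)
            except StopIteration:
                data_iter = iter(data_loader)
                xb, yb = next(data_iter)
            loss = loss_fn(model(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
            if compressed:                      # project back onto the sparse support
                with torch.no_grad():
                    for p, m in zip(model.parameters(), masks):
                        p.mul_(m)
    return model
```

Ending on a dense phase yields the dense model; re-applying the last masks yields its sparse counterpart, loosely mirroring the sparse-dense co-training property described above.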