Accurate deep learning sub-grid scale models for large eddy simulations
- URL: http://arxiv.org/abs/2307.10060v1
- Date: Wed, 19 Jul 2023 15:30:06 GMT
- Title: Accurate deep learning sub-grid scale models for large eddy simulations
- Authors: Rikhi Bose and Arunabha M. Roy
- Abstract summary: We present two families of sub-grid scale (SGS) turbulence models developed for large-eddy simulation (LES) purposes.
Their development required the formulation of physics-informed robust and efficient Deep Learning (DL) algorithms.
Explicit filtering of data from direct simulations of canonical channel flow at two friction Reynolds numbers provided accurate data for training and testing.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present two families of sub-grid scale (SGS) turbulence models developed
for large-eddy simulation (LES) purposes. Their development required the
formulation of physics-informed robust and efficient Deep Learning (DL)
algorithms which, unlike state-of-the-art analytical modeling techniques, can
produce high-order complex non-linear relations between inputs and outputs.
Explicit filtering of data from direct simulations of the canonical channel
flow at two friction Reynolds numbers $Re_\tau\approx 395$ and 590 provided
accurate data for training and testing. The two sets of models use different
network architectures. One of the architectures uses tensor basis neural
networks (TBNN) and embeds the simplified analytical model form of the general
effective-viscosity hypothesis, thus incorporating the Galilean, rotational and
reflectional invariances. The other architecture is a relatively simple
network that incorporates only the Galilean invariance. However,
this simpler architecture has better feature extraction capacity owing to its
ability to establish relations between and extract information from
cross-components of the integrity basis tensors and the SGS stresses. Both sets
of models are used to predict the SGS stresses for feature datasets generated
with different filter widths and at different Reynolds numbers. It is shown
that, owing to its better feature-learning capabilities, the simpler model
outperforms the invariance-embedded model on statistical performance metrics.
In a priori tests, both sets of models provide similar levels of dissipation
and backscatter. Based on the test results, both sets of models should be
usable in actual a posteriori LES.
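The paper provides no code here, but the TBNN branch described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch version, assuming (as in the general effective-viscosity hypothesis) a set of scalar invariants and integrity basis tensors built from the filtered velocity gradients; the counts `n_invariants=5` and `n_basis=10` and all layer sizes are illustrative, not taken from the paper.

```python
# Minimal sketch of a tensor basis neural network (TBNN) for SGS stresses:
# tau = sum_n g_n(invariants) * T^(n), with the g_n predicted by a small MLP.
import torch
import torch.nn as nn

class TBNN(nn.Module):
    def __init__(self, n_invariants=5, n_basis=10, width=64):
        super().__init__()
        self.coeff_net = nn.Sequential(
            nn.Linear(n_invariants, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis),  # one scalar coefficient g_n per basis tensor
        )

    def forward(self, invariants, basis):
        # invariants: (batch, n_invariants) scalar invariants of the filtered
        #             strain-rate/rotation-rate tensors
        # basis:      (batch, n_basis, 3, 3) integrity basis tensors T^(n)
        g = self.coeff_net(invariants)                 # (batch, n_basis)
        # tau_ij = sum_n g_n T^(n)_ij: the tensorial form is fixed, so the
        # invariances built into the basis carry over to the prediction.
        return torch.einsum("bn,bnij->bij", g, basis)

# Training targets would come from explicitly filtered DNS fields,
# tau_ij = filt(u_i u_j) - filt(u_i) filt(u_j); random tensors stand in here.
model = TBNN()
tau = model(torch.randn(8, 5), torch.randn(8, 10, 3, 3))
print(tau.shape)  # torch.Size([8, 3, 3])
```

The key design choice is that the network only predicts the scalar coefficients g_n; because the output is a linear combination of the basis tensors, the Galilean, rotational, and reflectional invariances of the basis are inherited by the predicted SGS stress.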
Related papers
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z)
- A Deep Dive into the Connections Between the Renormalization Group and Deep Learning in the Ising Model [0.0]
The renormalization group (RG) is an essential technique in statistical physics and quantum field theory.
We develop extensive renormalization techniques for the 1D and 2D Ising model to provide a baseline for comparison.
For the 2D Ising model, we successfully generated Ising model samples using the Wolff algorithm (sketched below), and performed the renormalization group flow using a quasi-deterministic method.
arXiv Detail & Related papers (2023-08-21T22:50:54Z)
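As a point of reference for the sampling step mentioned in the entry above, here is a minimal, self-contained sketch of the Wolff cluster update for the 2D Ising model; the lattice size, inverse temperature, and iteration counts are illustrative and not taken from the paper.

```python
# One Wolff cluster flip per call on an L x L lattice with periodic boundaries.
import numpy as np

def wolff_step(spins, beta, rng):
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)           # bond-activation probability
    seed = (rng.integers(L), rng.integers(L))
    cluster, stack = {seed}, [seed]
    s0 = spins[seed]
    while stack:
        i, j = stack.pop()
        neighbors = (((i + 1) % L, j), ((i - 1) % L, j),
                     (i, (j + 1) % L), (i, (j - 1) % L))
        for ni, nj in neighbors:
            # Add aligned neighbors to the cluster with probability p_add.
            if (ni, nj) not in cluster and spins[ni, nj] == s0 and rng.random() < p_add:
                cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in cluster:                         # flip the whole cluster at once
        spins[i, j] = -s0
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(1000):                            # equilibrate near criticality
    spins = wolff_step(spins, beta=0.44, rng=rng)
print(spins.mean())
```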
- Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks [0.0]
We show that valuable information is contained in the spectrum of the precision matrix, which can be extracted once training of the model is completed.
We conducted numerical experiments for regression, classification, and feature selection tasks.
Our results demonstrate that the proposed model yields attractive prediction performance compared to the competitors.
arXiv Detail & Related papers (2023-07-11T09:54:30Z)
- Differentiable Turbulence: Closure as a partial differential equation constrained optimization [1.8749305679160366]
We leverage the concept of differentiable turbulence, whereby an end-to-end differentiable solver is used in combination with physics-inspired choices of deep learning architectures.
We show that the differentiable physics paradigm is more successful than offline, a priori learning, and that hybrid solver-in-the-loop approaches to deep learning offer an ideal balance between computational efficiency, accuracy, and generalization (a toy sketch follows below).
arXiv Detail & Related papers (2023-07-07T15:51:55Z)
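The solver-in-the-loop idea can be made concrete with a toy sketch: backpropagate through an explicit time stepper to fit a closure parameter against a reference trajectory. The 1D diffusion problem, the target coefficient 0.02, and all numerical settings below are invented for illustration; the paper's solver and closure are far richer.

```python
import torch

nu, dx, dt, steps = 0.01, 1.0 / 64, 1e-4, 50
c = torch.tensor(0.1, requires_grad=True)         # learnable closure coefficient
x = torch.arange(64) * dx
u0 = torch.sin(2 * torch.pi * x)
# Reference: analytic decay of the sine mode under total diffusivity nu + 0.02.
u_ref = u0 * torch.exp(torch.tensor(-4 * torch.pi**2 * (nu + 0.02) * dt * steps))

opt = torch.optim.Adam([c], lr=1e-2)
for epoch in range(200):
    u = u0.clone()
    for _ in range(steps):                        # differentiable time stepping
        lap = (torch.roll(u, -1) - 2 * u + torch.roll(u, 1)) / dx**2
        u = u + dt * (nu + c) * lap               # closure term inside the solver
    loss = torch.mean((u - u_ref) ** 2)           # a posteriori trajectory loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(c))                                   # ends up near 0.02
```

Because the loss is evaluated on the solver's output rather than on instantaneous closure targets, the gradient accounts for how errors accumulate over the trajectory, which is the essence of the a posteriori (in-the-loop) training paradigm.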
- Git Re-Basin: Merging Models modulo Permutation Symmetries [3.5450828190071655]
We show how simple algorithms can be used to fit large networks in practice.
We present the first (to our knowledge) demonstration of zero-barrier linear mode connectivity between independently trained models (see the sketch below).
We also discuss shortcomings in the linear mode connectivity hypothesis.
arXiv Detail & Related papers (2022-09-11T10:44:27Z)
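A minimal sketch of the permutation-matching idea behind Git Re-Basin, reduced to a single hidden layer: align the hidden units of one model to a reference via a linear assignment over weight similarities, then average the aligned weights. The helper `align_and_merge`, the omission of biases, and all sizes are hypothetical, not the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_and_merge(W1_a, W2_a, W1_b, W2_b):
    # A hidden unit is described by a row of W1 and a column of W2.
    sim = W1_a @ W1_b.T + W2_a.T @ W2_b          # (hidden, hidden) similarity
    row, col = linear_sum_assignment(-sim)       # maximize total similarity
    P = np.eye(sim.shape[0])[col]                # permutation matrix for model B
    W1_b_aligned, W2_b_aligned = P @ W1_b, W2_b @ P.T
    return 0.5 * (W1_a + W1_b_aligned), 0.5 * (W2_a + W2_b_aligned)

rng = np.random.default_rng(0)
W1_a, W2_a = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
perm = rng.permutation(16)                       # model B = permuted copy of A
W1_b, W2_b = W1_a[perm], W2_a[:, perm]
W1_m, W2_m = align_and_merge(W1_a, W2_a, W1_b, W2_b)
print(np.allclose(W1_m, W1_a), np.allclose(W2_m, W2_a))  # expect: True True
```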
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS reduces inference time by up to a factor of 18 compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction from two views by finding maximally correlated linear projections of them (a classical linear CCA baseline is sketched below).
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
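For context, the classical linear CCA that the entry above builds on can be written in a few lines: whiten each view, then take the SVD of the whitened cross-covariance. The function `cca`, the ridge term `eps`, and the synthetic two-view data are illustrative assumptions, not from the paper.

```python
import numpy as np

def cca(X, Y, k=2, eps=1e-8):
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms from Cholesky factors, then SVD of the
    # whitened cross-covariance gives the canonical directions.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k], s[:k]  # projections + correlations

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))                      # shared latent signal
X = z @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(500, 6))
Y = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))
A, B, corrs = cca(X, Y)
print(corrs)                                       # top correlations close to 1
```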
- Deep transfer learning for improving single-EEG arousal detection [63.52264764099532]
The two datasets do not share exactly the same recording setup, which degrades the performance of single-EEG models.
We train a baseline model and replace the first two layers to prepare the architecture for single-channel electroencephalography data.
Using a fine-tuning strategy, our model yields performance similar to the baseline model and is significantly better than a comparable single-channel model (see the sketch below).
arXiv Detail & Related papers (2020-04-10T16:51:06Z)
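The layer-replacement strategy in the last entry can be sketched in PyTorch as follows; the stand-in architecture, channel counts, and training snippet are hypothetical, chosen only to show the mechanics of swapping the multi-channel front end for a single-channel one and fine-tuning selectively.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" multi-channel model (weights would be loaded in practice).
base = nn.Sequential(
    nn.Conv1d(6, 32, kernel_size=7, padding=3), nn.ReLU(),   # expects 6 channels
    nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 2),
)

# Replace the first two (convolutional) layers so the network accepts
# single-channel EEG while keeping the deeper pretrained layers.
base[0] = nn.Conv1d(1, 32, kernel_size=7, padding=3)
base[2] = nn.Conv1d(32, 32, kernel_size=7, padding=3)

# Fine-tune selectively: freeze everything, then unfreeze the new layers
# and the classification head.
for p in base.parameters():
    p.requires_grad = False
for layer in (base[0], base[2], base[6]):
    for p in layer.parameters():
        p.requires_grad = True

opt = torch.optim.Adam((p for p in base.parameters() if p.requires_grad), lr=1e-3)
x = torch.randn(16, 1, 3000)                  # dummy single-channel EEG windows
y = torch.randint(0, 2, (16,))                # dummy arousal labels
loss = nn.functional.cross_entropy(base(x), y)
loss.backward()
opt.step()
```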
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.