Learned Turbulence Modelling with Differentiable Fluid Solvers
- URL: http://arxiv.org/abs/2202.06988v1
- Date: Mon, 14 Feb 2022 19:03:01 GMT
- Title: Learned Turbulence Modelling with Differentiable Fluid Solvers
- Authors: Björn List, Li-Wei Chen and Nils Thuerey
- Abstract summary: We train turbulence models based on convolutional neural networks.
These models improve under-resolved, low-resolution solutions to the incompressible Navier-Stokes equations at simulation time.
- Score: 23.535052848123932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we train turbulence models based on convolutional neural
networks. These learned turbulence models improve under-resolved, low-resolution
solutions to the incompressible Navier-Stokes equations at simulation time. Our
method involves the development of a differentiable numerical solver that
supports the propagation of optimisation gradients through multiple solver
steps. We showcase the significance of this property by demonstrating the
superior stability and accuracy of those models that featured a higher number
of unrolled steps during training. This approach is applied to three
two-dimensional turbulence flow scenarios: a homogeneous decaying turbulence
case, a temporally evolving mixing layer, and a spatially evolving mixing layer.
Our method achieves significant improvements of long-term a-posteriori
statistics when compared to no-model simulations, without requiring these
statistics to be directly included in the learning targets. At inference time,
our proposed method also achieves substantial performance improvements over
similarly accurate, purely numerical methods.
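For concreteness, the sketch below illustrates the core training pattern the
abstract describes: a differentiable solver step is unrolled for several
iterations, a learned correction is applied after each step, and gradients flow
through the whole chain. The toy 1D diffusion PDE, the small convolutional
corrector, and all names and hyperparameters are illustrative assumptions, not
the authors' implementation.

```python
import torch
import torch.nn as nn

def solver_step(u, nu=0.1, dx=1.0, dt=0.2):
    # One explicit finite-difference step of du/dt = nu * d2u/dx2 on a
    # periodic 1D grid, written with differentiable tensor operations so
    # autograd can backpropagate through it.
    lap = (torch.roll(u, 1, -1) - 2.0 * u + torch.roll(u, -1, -1)) / dx**2
    return u + dt * nu * lap

# Small convolutional network standing in for the learned turbulence model.
corrector = nn.Sequential(
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, 5, padding=2),
)
opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)

def unrolled_loss(u0, reference, m):
    # Advance m solver steps, applying the learned correction after each
    # one; the loss compares every intermediate state to the reference.
    u, loss = u0, 0.0
    for k in range(m):
        u = solver_step(u)
        u = u + corrector(u)          # correction stays in the autograd graph
        loss = loss + ((u - reference[k]) ** 2).mean()
    return loss / m

# Dummy reference trajectory; a real setup would use downsampled
# high-resolution simulation data as the learning target.
u0 = torch.randn(8, 1, 64)
refs, u = [], u0.detach()
for _ in range(4):
    u = solver_step(u)
    refs.append(u)

opt.zero_grad()
loss = unrolled_loss(u0, refs, m=4)
loss.backward()                        # gradients flow through 4 solver steps
opt.step()
```

Training with a larger number of unrolled steps m is more expensive, but the
abstract credits exactly this longer-horizon gradient feedback for the improved
stability and accuracy of the resulting models.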
Related papers
- Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training [4.760537994346813]
As data distributions grow more complex, training diffusion models to convergence becomes increasingly intensive.
We introduce a non-uniform timestep sampling method that prioritizes the more critical timesteps.
Our method shows robust performance across various datasets, scheduling strategies, and diffusion architectures.
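A minimal sketch of what such prioritized sampling could look like, assuming a
simple running-loss heuristic; the EMA criterion and all names here are
illustrative, not the paper's exact scheme:

```python
import numpy as np

T = 1000                               # number of diffusion timesteps
loss_ema = np.ones(T)                  # running loss estimate per timestep

def sample_timesteps(batch_size, rng):
    # Draw timesteps with probability proportional to their estimated loss,
    # concentrating training effort on the harder timesteps.
    p = loss_ema / loss_ema.sum()
    return rng.choice(T, size=batch_size, p=p)

def update_estimates(t, losses, decay=0.99):
    # Exponential moving average of observed per-timestep training losses.
    for ti, li in zip(t, losses):
        loss_ema[ti] = decay * loss_ema[ti] + (1 - decay) * li

rng = np.random.default_rng(0)
t = sample_timesteps(64, rng)
update_estimates(t, np.abs(rng.normal(size=64)))   # dummy loss values
```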
arXiv Detail & Related papers (2024-11-15T07:12:18Z)
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
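For context, here is a minimal sketch of the generic simulation-free
flow-matching objective this line of work builds on: a velocity network is
regressed onto the straight-line interpolant between paired states, with no
backpropagation through an ODE/SDE solver. The tiny network and the linear
interpolant are illustrative assumptions; TFM's SDE and clinical-specific
components are omitted.

```python
import torch
import torch.nn as nn

dim = 8
v_net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))

def flow_matching_loss(x0, x1):
    # Sample a random time, form the straight-line interpolant between the
    # paired states, and regress the network onto its constant velocity.
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = v_net(torch.cat([xt, t], dim=-1))
    return ((pred - target) ** 2).mean()

x0, x1 = torch.randn(32, dim), torch.randn(32, dim)
loss = flow_matching_loss(x0, x1)
loss.backward()       # no backpropagation through a solver is needed
```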
arXiv Detail & Related papers (2024-10-28T15:54:50Z)
- Provable Statistical Rates for Consistency Diffusion Models [87.28777947976573]
Despite the state-of-the-art performance, diffusion models are known for their slow sample generation due to the extensive number of steps involved.
This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem.
arXiv Detail & Related papers (2024-06-23T20:34:18Z)
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
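A minimal sketch of the over-the-air pattern with a server-side AdaGrad step:
the analog channel delivers a noisy sum of the clients' gradients rather than
the individual values. The channel noise model, step sizes, and placeholder
local gradients are illustrative assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_clients = 5, 10
w = np.zeros(dim)                       # global model on the server
accum = np.zeros(dim)                   # AdaGrad accumulator

def local_gradient(w, k):
    # Placeholder local objective: each client pulls w toward its own target.
    return w - np.full(dim, k / num_clients)

for step in range(200):
    grads = np.stack([local_gradient(w, k) for k in range(num_clients)])
    # Over-the-air aggregation: the channel sums the transmitted analog
    # signals, so the server receives only a noisy aggregate gradient.
    g = grads.mean(axis=0) + 0.01 * rng.normal(size=dim)
    accum += g ** 2
    w -= 0.5 * g / (np.sqrt(accum) + 1e-8)   # server-side AdaGrad update
```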
arXiv Detail & Related papers (2024-03-11T09:10:37Z)
- On stable wrapper-based parameter selection method for efficient ANN-based data-driven modeling of turbulent flows [2.0731505001992323]
This study aims to analyze and develop a reduced modeling approach based on artificial neural networks (ANN) and wrapper methods.
It is found that gradient-based subset selection minimizing the total derivative loss results in improved consistency over trials.
For the reduced turbulent Prandtl number model, the gradient-based subset selection improves the prediction in the validation case over the other methods.
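A minimal sketch of gradient-based input scoring in this spirit, using mean
absolute input sensitivity as an illustrative stand-in for the paper's total
derivative loss; a real wrapper method would additionally retrain and
re-evaluate the candidate subsets.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(256, 10, requires_grad=True)     # candidate input features

# Sensitivity of the network output with respect to each input feature,
# averaged over the sample batch.
(grad,) = torch.autograd.grad(net(x).sum(), x)
score = grad.abs().mean(dim=0)

keep = torch.topk(score, k=4).indices            # retained feature subset
print(sorted(keep.tolist()))
```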
arXiv Detail & Related papers (2023-08-04T08:26:56Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Forecasting through deep learning and modal decomposition in two-phase concentric jets [2.362412515574206]
This work aims to improve fuel chamber injectors' performance in turbofan engines.
It requires the development of models that allow real-time prediction and improvement of the fuel/air mixture.
arXiv Detail & Related papers (2022-12-24T12:59:41Z)
- A unified method of data assimilation and turbulence modeling for separated flows at high Reynolds numbers [0.0]
In this paper, we propose an improved ensemble Kalman inversion method as a unified approach to data assimilation and turbulence modeling.
The trainable parameters of the DNN are optimized according to the given experimental surface pressure coefficients.
The results show that through joint assimilation of very few experimental states, we can obtain turbulence models that generalize well to both attached and separated flows.
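A minimal sketch of one ensemble Kalman inversion update, the derivative-free
step such an approach relies on. The linear placeholder forward model stands in
for a RANS solver with a DNN-augmented closure evaluated against surface
pressure coefficients; all names and sizes are illustrative assumptions.

```python
import numpy as np

def eki_update(theta, y, forward, noise_cov, rng):
    # theta: (J, p) parameter ensemble; y: (d,) observed data.
    g = np.stack([forward(th) for th in theta])   # (J, d) model outputs
    dth = theta - theta.mean(axis=0)
    dg = g - g.mean(axis=0)
    c_tg = dth.T @ dg / (len(theta) - 1)          # param/output covariance
    c_gg = dg.T @ dg / (len(theta) - 1)           # output covariance
    gain = c_tg @ np.linalg.inv(c_gg + noise_cov)
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), noise_cov, len(theta))
    return theta + (y_pert - g) @ gain.T          # Kalman-style update

# Toy demo: recover 3 parameters of a linear "solver" from 5 observations.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
truth = np.array([1.0, -2.0, 0.5])
y = A @ truth
theta = rng.normal(size=(40, 3))                  # initial ensemble
for _ in range(10):
    theta = eki_update(theta, y, lambda th: A @ th, 1e-4 * np.eye(5), rng)
```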
arXiv Detail & Related papers (2022-11-01T17:17:53Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computational and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
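A minimal sketch of the deep-equilibrium pattern: iterate a cell to an
approximate fixed point without storing the computation graph, then
differentiate through a single final application, a common inexpensive
surrogate for full implicit differentiation. DEQ flow's actual solver and
backward pass are more sophisticated; the cell and sizes here are illustrative.

```python
import torch
import torch.nn as nn

class DEQ(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def f(self, z, x):
        return self.cell(torch.cat([z, x], dim=-1))

    def forward(self, x, iters=50, tol=1e-4):
        z = torch.zeros_like(x)
        with torch.no_grad():                  # root-finding, no graph kept
            for _ in range(iters):
                z_next = self.f(z, x)
                if (z_next - z).norm() < tol:
                    z = z_next
                    break
                z = z_next
        return self.f(z, x)                    # one differentiable application

model = DEQ(dim=16)
x = torch.randn(4, 16)
out = model(x)                                 # "infinite-depth" estimate
out.sum().backward()                           # memory cost of one layer
```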
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)