Turbulence Scaling from Deep Learning Diffusion Generative Models
- URL: http://arxiv.org/abs/2311.06112v2
- Date: Fri, 5 Jul 2024 14:28:33 GMT
- Title: Turbulence Scaling from Deep Learning Diffusion Generative Models
- Authors: Tim Whittaker, Romuald A. Janik, Yaron Oz
- Abstract summary: We employ a diffusion-based generative model to learn the distribution of turbulent vorticity profiles.
We generate snapshots of turbulent solutions to the incompressible Navier-Stokes equations.
All the learnt scaling exponents are consistent with the expected Kolmogorov scaling.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex spatial and temporal structures are inherent characteristics of turbulent fluid flows, and comprehending them poses a major challenge. This comprehension necessitates an understanding of the space of turbulent fluid flow configurations. We employ a diffusion-based generative model to learn the distribution of turbulent vorticity profiles and generate snapshots of turbulent solutions to the incompressible Navier-Stokes equations. We consider the inverse cascade in two spatial dimensions and generate diverse turbulent solutions that differ from those in the training dataset. We analyze the statistical scaling properties of the new turbulent profiles, calculate their structure functions, energy power spectrum, velocity probability distribution function, and moments of local energy dissipation. All the learnt scaling exponents are consistent with the expected Kolmogorov scaling. This agreement with established turbulence characteristics provides strong evidence of the model's capability to capture essential features of real-world turbulence.
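The scaling diagnostics named in the abstract can be illustrated with a toy computation. The sketch below (pure Python, not the paper's code) estimates p-th order structure functions of a periodic 1D velocity sample and fits the scaling exponent ζ_p from a log-log slope; Kolmogorov's K41 theory predicts ζ_p = p/3 in the inertial range.

```python
import math

def structure_function(u, p, r):
    """p-th order structure function of a periodic 1D velocity sample:
    S_p(r) = < |u(x + r) - u(x)|^p >, averaged over all grid points x."""
    n = len(u)
    return sum(abs(u[(i + r) % n] - u[i]) ** p for i in range(n)) / n

def fitted_exponent(u, p, r_values):
    """Least-squares slope of log S_p(r) versus log r, i.e. the scaling
    exponent zeta_p; K41 predicts zeta_p = p / 3 in the inertial range."""
    xs = [math.log(r) for r in r_values]
    ys = [math.log(structure_function(u, p, r)) for r in r_values]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

On a smooth signal the second-order exponent fitted at small separations approaches 2, while a genuine inertial-range field would give values near 2/3; comparing such fitted exponents against p/3 is the kind of consistency check the paper reports.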
Related papers
- Stochastic Reconstruction of Gappy Lagrangian Turbulent Signals by Conditional Diffusion Models [1.7810134788247751]
We present a method for reconstructing missing spatial and velocity data along the trajectories of small objects passively advected by turbulent flows.
Our approach makes use of conditional generative diffusion models, a recently proposed data-driven machine learning technique.
arXiv Detail & Related papers (2024-10-31T14:26:10Z) - Generative AI for fast and accurate Statistical Computation of Fluids [21.820160898966055]
We present a generative AI algorithm for addressing the challenging task of fast, accurate and robust statistical computation.
Our algorithm, termed GenCFD, is based on a conditional score-based diffusion model.
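To illustrate the score-based diffusion family these models belong to (this is a toy, not GenCFD itself), the sketch below integrates the probability-flow ODE of a variance-preserving diffusion for 1D Gaussian data, where the score of every marginal is available in closed form.

```python
import math

# Toy score-based diffusion: data ~ N(MU, SIGMA0^2), variance-preserving
# forward SDE dx = -0.5*beta*x dt + sqrt(beta) dW with constant beta.
MU, SIGMA0, BETA = 2.0, 0.5, 1.0

def alpha(t):
    # Signal decay factor of the variance-preserving forward process.
    return math.exp(-0.5 * BETA * t)

def score(x, t):
    # Exact score of the marginal N(alpha*MU, alpha^2*SIGMA0^2 + 1 - alpha^2).
    a = alpha(t)
    var = a * a * SIGMA0 * SIGMA0 + 1.0 - a * a
    return -(x - a * MU) / var

def generate(x1, steps=1000):
    """Euler-integrate the probability-flow ODE
    dx/dt = -0.5 * beta * (x + score(x, t))
    backward from t=1 (near the prior) to t=0 (the data distribution)."""
    x, dt = x1, 1.0 / steps
    for k in range(steps):
        t = 1.0 - k * dt
        x -= dt * (-0.5 * BETA * (x + score(x, t)))
    return x
```

In practice the score is unknown and a neural network is trained to approximate it; the sampler is unchanged, which is why closed-form Gaussian toys like this one are a standard sanity check.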
arXiv Detail & Related papers (2024-09-27T00:26:18Z) - Single-snapshot machine learning for super-resolution of turbulence [0.0]
Nonlinear machine-learning techniques can effectively extract physical insights from as little as a single snapshot of turbulent flow.
We show that a machine-learning model trained with flow tiles sampled from only a single snapshot can reconstruct vortical structures across a range of Reynolds numbers.
This work encourages machine-learning practitioners to make less wasteful use of turbulent flow data.
arXiv Detail & Related papers (2024-09-07T22:13:26Z) - Unfolding Time: Generative Modeling for Turbulent Flows in 4D [49.843505326598596]
This work introduces a 4D generative diffusion model and a physics-informed guidance technique that enables the generation of realistic sequences of flow states.
Our findings indicate that the proposed method can successfully sample entire subsequences from the turbulent manifold.
This advancement opens doors for the application of generative modeling in analyzing the temporal evolution of turbulent flows.
arXiv Detail & Related papers (2024-06-17T10:21:01Z) - Predicting Cascading Failures with a Hyperparametric Diffusion Model [66.89499978864741]
We study cascading failures in power grids through the lens of diffusion models.
Our model integrates viral diffusion principles with physics-based concepts.
We show that this diffusion model can be learned from traces of cascading failures.
arXiv Detail & Related papers (2024-06-12T02:34:24Z) - Convolutional autoencoder for the spatiotemporal latent representation of turbulence [5.8010446129208155]
We employ a three-dimensional multiscale convolutional autoencoder (CAE) to obtain a latent representation of a turbulent flow.
We show that the multiscale CAE is efficient, requiring fewer than 10% of the degrees of freedom needed by proper orthogonal decomposition to compress the data.
The proposed deep learning architecture opens opportunities for nonlinear reduced-order modeling of turbulent flows from data.
arXiv Detail & Related papers (2023-01-31T16:06:54Z) - DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z) - A Numerical Proof of Shell Model Turbulence Closure [41.94295877935867]
We present a closure, based on deep recurrent neural networks, that quantitatively reproduces, within statistical errors, Eulerian and Lagrangian structure functions and the intermittent statistics of the energy cascade.
Our results encourage the development of similar approaches for 3D Navier-Stokes turbulence.
arXiv Detail & Related papers (2022-02-18T16:31:57Z) - Flowformer: Linearizing Transformers with Conservation Flows [77.25101425464773]
We linearize Transformers, free from specific inductive biases, based on flow network theory.
By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions.
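Flow-Attention's conservation idea can be sketched, loosely, as kernelized linear attention with a non-negative feature map and per-sink normalization. The code below is a deliberate simplification, not the paper's exact formulation: it only conserves each sink's incoming flow, omitting the source-side competition.

```python
import math

def phi(x):
    # Non-negative feature map (sigmoid here), so all "flows" are positive.
    return [1.0 / (1.0 + math.exp(-v)) for v in x]

def linear_attention(Q, K, V):
    """Kernelized attention: out_i = sum_j (phi(q_i).phi(k_j) / Z_i) * v_j,
    where Z_i = sum_j phi(q_i).phi(k_j) is the incoming flow of sink i.
    Dividing by Z_i fixes each sink's total incoming flow at 1, the basic
    conservation principle behind Flow-Attention (simplified)."""
    fq = [phi(q) for q in Q]
    fk = [phi(k) for k in K]
    out = []
    for qi in fq:
        weights = [sum(a * b for a, b in zip(qi, kj)) for kj in fk]
        z = sum(weights)
        row = [sum(w / z * v[d] for w, v in zip(weights, V))
               for d in range(len(V[0]))]
        out.append(row)
    return out
```

This loop is written for clarity and is quadratic; the point of the kernelized form is that a vectorized version can precompute the shared sums over keys and values once, giving complexity linear in sequence length.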
arXiv Detail & Related papers (2022-02-13T08:44:10Z) - Designing Air Flow with Surrogate-assisted Phenotypic Niching [117.44028458220427]
We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm.
It allows the discovery of a large, diverse set of behaviors by using computationally expensive phenotypic features.
In this work we discover the types of air flow in a 2D fluid dynamics optimization problem.
arXiv Detail & Related papers (2021-05-10T10:45:28Z) - Focus of Attention Improves Information Transfer in Visual Features [80.22965663534556]
This paper focuses on unsupervised learning for transferring visual information in a truly online setting.
The computation of the entropy terms is carried out by a temporal process which yields online estimation of the entropy terms.
In order to better structure the input probability distribution, we use a human-like focus of attention model.
arXiv Detail & Related papers (2020-06-16T15:07:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.