Towards extraction of orthogonal and parsimonious non-linear modes from
turbulent flows
- URL: http://arxiv.org/abs/2109.01514v1
- Date: Fri, 3 Sep 2021 13:38:51 GMT
- Title: Towards extraction of orthogonal and parsimonious non-linear modes from
turbulent flows
- Authors: Hamidreza Eivazi, Soledad Le Clainche, Sergio Hoyas, Ricardo Vinuesa
- Abstract summary: We propose a deep probabilistic-neural-network architecture for learning a minimal and near-orthogonal set of non-linear modes.
Our approach is based on $\beta$-variational autoencoders ($\beta$-VAEs) and convolutional neural networks (CNNs).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a deep probabilistic-neural-network architecture for learning a
minimal and near-orthogonal set of non-linear modes from high-fidelity
turbulent-flow-field data useful for flow analysis, reduced-order modeling, and
flow control. Our approach is based on $\beta$-variational autoencoders
($\beta$-VAEs) and convolutional neural networks (CNNs), which allow us to
extract non-linear modes from multi-scale turbulent flows while encouraging the
learning of independent latent variables and penalizing the size of the latent
vector. Moreover, we introduce an algorithm for ordering VAE-based modes with
respect to their contribution to the reconstruction. We apply this method for
non-linear mode decomposition of the turbulent flow through a simplified urban
environment, where the flow-field data are obtained from well-resolved
large-eddy simulations (LESs). We demonstrate that by constraining the shape of
the latent space, it is possible to promote orthogonality and to extract a
parsimonious set of modes sufficient for high-quality reconstruction. Our
results show that the method clearly outperforms linear-theory-based
decompositions in reconstruction quality. Moreover, we compare our method with
available AE-based models and show that our approach extracts near-orthogonal
modes, which may improve interpretability.
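The two core ingredients described in the abstract, the $\beta$-VAE objective and the ordering of modes by their contribution to the reconstruction, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy linear decoder, the variance scaling, and all function names are placeholders invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL
    divergence between the diagonal-Gaussian posterior N(mu, exp(logvar))
    and the standard-normal prior. beta > 1 penalizes latent capacity,
    encouraging independent (near-orthogonal) latent variables."""
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + beta * kl

def rank_modes_by_reconstruction(decode, z, x):
    """Ordering in the spirit of the paper's algorithm: rank each latent
    dimension by how well it reconstructs the snapshots when used alone.
    `decode`, `z`, and `x` stand in for a trained decoder, the encoded
    latent vectors, and the reference flow snapshots."""
    errors = []
    for k in range(z.shape[1]):
        z_k = np.zeros_like(z)
        z_k[:, k] = z[:, k]          # keep only mode k active
        errors.append(np.mean((x - decode(z_k)) ** 2))
    return np.argsort(errors)        # best-reconstructing mode first

# Toy check with a linear "decoder" (an assumption for illustration only):
# latent direction 0 carries the most variance, so it should rank first.
W = rng.normal(size=(3, 8))
decode = lambda z: z @ W
z = rng.normal(size=(100, 3)) * np.array([5.0, 1.0, 0.2])
x = decode(z)

loss = beta_vae_loss(x, decode(z), np.zeros(3), np.zeros(3))  # -> 0.0
order = rank_modes_by_reconstruction(decode, z, x)
print(order.tolist())  # mode 0 (largest-variance direction) ranks first
```

In the paper the decoder is a CNN trained on turbulent-flow snapshots; the linear decoder above only serves to make the ranking logic concrete and runnable.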
Related papers
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z) - Koopman-Based Surrogate Modelling of Turbulent Rayleigh-Bénard Convection [4.248022697109535]
We use a Koopman-inspired architecture called the Linear Recurrent Autoencoder Network (LRAN) for learning reduced-order dynamics in convection flows.
A traditional fluid-dynamics method, kernel dynamic mode decomposition (KDMD), is used as a baseline against which the LRAN is compared.
We obtained more accurate predictions with the LRAN than with KDMD in the most turbulent setting.
arXiv Detail & Related papers (2024-05-10T12:15:02Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep neural network (DNN) models are widely used, but their training objectives are non-convex.
In this paper we examine the use of convex neural-network recovery models.
We show that all stationary points of the non-convex objective can be characterized as global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z) - On the application of generative adversarial networks for nonlinear
modal analysis [0.0]
A machine learning scheme is proposed with a view to performing nonlinear modal analysis.
The scheme is focussed on defining a one-to-one mapping from a latent 'modal' space to the natural coordinate space.
The mapping is achieved via the use of the recently-developed cycle-consistent generative adversarial network (cycle-GAN) and an assembly of neural networks.
arXiv Detail & Related papers (2022-03-02T16:46:41Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2D non-linear conservation law and a 2D shallow-water model, and compare the results with those of a purely data-driven method in which the dynamics are evolved in time with a long short-term memory (LSTM) network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.