On stable wrapper-based parameter selection method for efficient
ANN-based data-driven modeling of turbulent flows
- URL: http://arxiv.org/abs/2308.02602v1
- Date: Fri, 4 Aug 2023 08:26:56 GMT
- Title: On stable wrapper-based parameter selection method for efficient
ANN-based data-driven modeling of turbulent flows
- Authors: Hyeongeun Yun, Yongcheol Choi, Youngjae Kim, and Seongwon Kang
- Abstract summary: This study aims to analyze and develop a reduced modeling approach based on artificial neural network (ANN) and wrapper methods.
It is found that the gradient-based subset selection to minimize the total derivative loss results in improved consistency-over-trials.
For the reduced turbulent Prandtl number model, the gradient-based subset selection improves the prediction in the validation case over the other methods.
- Score: 2.0731505001992323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To model complex turbulent flow and heat transfer phenomena, this study aims
to analyze and develop a reduced modeling approach based on artificial neural
network (ANN) and wrapper methods. This approach has an advantage over other
methods such as the correlation-based filter method in terms of removing
redundant or irrelevant parameters even under non-linearity among them. As a
downside, the overfitting and randomness of ANN training may produce
inconsistent subsets over selection trials especially in a higher physical
dimension. This study analyzes a few existing ANN-based wrapper methods and
develops a revised one based on the gradient-based subset selection indices to
minimize the loss in the total derivative or the directional consistency at
each elimination step. To examine parameter reduction performance and
consistency-over-trials, we apply these methods to a manufactured subset
selection problem, modeling of the bubble size in a turbulent bubbly flow, and
modeling of the spatially varying turbulent Prandtl number in a duct flow. It
is found that the gradient-based subset selection to minimize the total
derivative loss results in improved consistency-over-trials compared to the
other ANN-based wrapper methods, while removing unnecessary parameters
successfully. For the reduced turbulent Prandtl number model, the
gradient-based subset selection improves the prediction in the validation case
over the other methods. Also, the reduced parameter subsets show a slight
increase in the training speed compared to the others.
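As a minimal illustration of the gradient-based wrapper idea (not the authors' implementation), the sketch below performs backward elimination: at each step it refits a surrogate on the remaining parameters and drops the one whose mean absolute output gradient is smallest, i.e. the one contributing least to the total derivative. A closed-form linear model stands in for the ANN so the example stays short and deterministic; the synthetic target depends only on the first two of four inputs.

```python
import numpy as np

def fit_linear(X, y):
    # closed-form least squares as a stand-in for the ANN surrogate
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return coef[:-1]

def gradient_backward_elimination(X, y, n_keep):
    """Wrapper-style subset selection: at each step, drop the feature whose
    mean absolute output gradient is smallest (least total-derivative loss)."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        coef = fit_linear(X[:, remaining], y)
        sensitivities = np.abs(coef)  # for a linear model, d y_hat / d x_j is the coefficient
        drop = remaining[int(np.argmin(sensitivities))]
        remaining.remove(drop)
    return remaining

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.normal(size=200)  # x2, x3 irrelevant
print(gradient_backward_elimination(X, y, n_keep=2))  # -> [0, 1]
```

With an actual ANN, the sensitivities would instead be computed by backpropagating the network output to its inputs and averaging over the training samples.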
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
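The deep-ensembling baseline mentioned above can be sketched in a few lines: train (or here, simply supply) several independent models, and use the spread of their predictions as an approximate uncertainty measure. The toy "models" below are hypothetical surrogates that agree near the data and diverge away from it.

```python
import numpy as np

def deep_ensemble_predict(models, x):
    """Ensemble-based uncertainty: predict with several independently trained
    models and report the spread as an (approximate) uncertainty proxy."""
    preds = np.array([m(x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

# toy 'ensemble': surrogates that agree near x = 0 and disagree far from it
models = [lambda x, a=a: np.sin(x) + a * 0.1 * x**2 for a in (-1.0, 0.0, 1.0)]
mean_near, std_near = deep_ensemble_predict(models, np.array([0.1]))
mean_far, std_far = deep_ensemble_predict(models, np.array([3.0]))
print(std_near[0] < std_far[0])  # True: uncertainty grows away from x = 0
```

The paper's framework additionally quantifies the uncertainty contributed by the surrogate itself; this sketch shows only the ensemble-spread component.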
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- A Metaheuristic for Amortized Search in High-Dimensional Parameter Spaces [0.0]
We propose a new metaheuristic that drives dimensionality reductions from feature-informed transformations.
DR-FFIT implements an efficient sampling strategy that facilitates a gradient-free parameter search in high-dimensional spaces.
Our test data show that DR-FFIT boosts the performances of random-search and simulated-annealing against well-established metaheuristics.
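The core idea of searching in a reduced space can be sketched as below: candidates are drawn in low-dimensional coordinates and mapped up through a fixed linear transformation before the objective is evaluated. Here a random projection stands in for DR-FFIT's feature-informed transformations, so this is an illustrative simplification rather than the paper's method.

```python
import numpy as np

def reduced_random_search(objective, dim, reduced_dim, n_samples, scale=1.0, seed=0):
    """Gradient-free random search carried out in a low-dimensional subspace."""
    rng = np.random.default_rng(seed)
    # a random projection stands in for the feature-informed transform
    basis = rng.normal(size=(dim, reduced_dim)) / np.sqrt(reduced_dim)
    best_x, best_val = np.zeros(dim), objective(np.zeros(dim))
    for _ in range(n_samples):
        z = rng.normal(size=reduced_dim) * scale   # sample in reduced coordinates
        x = basis @ z                              # map back to the full space
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# toy quadratic objective in 50 dimensions
f = lambda x: float(np.sum((x - 0.5) ** 2))
x_star, v = reduced_random_search(f, dim=50, reduced_dim=5, n_samples=200)
print(v)  # best value found; no worse than the origin's f = 12.5
```

Sampling 5 coordinates instead of 50 is what makes the search tractable; the quality of the subspace (random here, learned in DR-FFIT) determines how much of the objective's structure survives the reduction.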
arXiv Detail & Related papers (2023-09-28T14:25:14Z)
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
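A single EKF step for online parameter estimation can be sketched as follows. The observation model, its Jacobian, and all noise levels below are illustrative assumptions, and the paper's low-rank plus diagonal posterior decomposition is replaced by a small full covariance for clarity.

```python
import numpy as np

def ekf_online_update(theta, P, x, y, Q=1e-5, R=0.1):
    """One extended-Kalman-filter step for the parameters of a nonlinear
    observation model y = f(x; theta) + noise."""
    f = theta[0] * np.tanh(theta[1] * x)                       # predicted observation
    H = np.array([np.tanh(theta[1] * x),                       # d f / d theta0
                  theta[0] * x / np.cosh(theta[1] * x) ** 2])  # d f / d theta1
    P = P + Q * np.eye(2)                # predict step (random-walk parameter drift)
    S = H @ P @ H + R                    # innovation variance (scalar observation)
    K = P @ H / S                        # Kalman gain
    theta = theta + K * (y - f)          # correct the estimate
    P = P - np.outer(K, H @ P)           # update the posterior covariance
    return theta, P

rng = np.random.default_rng(1)
true = np.array([2.0, 0.5])
theta, P = np.array([1.0, 1.0]), np.eye(2)
for _ in range(2000):
    x = rng.uniform(-3, 3)
    y = true[0] * np.tanh(true[1] * x) + 0.05 * rng.normal()
    theta, P = ekf_online_update(theta, P, x, y)
print(theta)  # approaches [2.0, 0.5]
```

For a network with millions of weights, storing and updating the full `P` is what becomes infeasible, which is what motivates the low-rank plus diagonal structure in the paper.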
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- An iterative multi-fidelity approach for model order reduction of multi-dimensional input parametric PDE systems [0.0]
We propose a sampling parametric strategy for the reduction of large-scale PDE systems with multidimensional input parametric spaces.
It is achieved by exploiting low-fidelity models throughout the parametric space to sample points using an efficient sampling strategy.
Since the proposed methodology leverages the use of low-fidelity models to assimilate the solution database, it significantly reduces the computational cost in the offline stage.
arXiv Detail & Related papers (2023-01-23T15:25:58Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
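The kind of sparsity-regularized problem targeted here can be illustrated with plain proximal gradient descent on the lasso objective; this is a simplified stand-in for the accelerated doubly stochastic scheme, showing only the soft-thresholding primitive that handles the nonsmooth penalty.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the L1 penalty
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(X, y, lam=0.05, step=None, n_iter=500):
    """Proximal gradient descent for 0.5/n * ||X w - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    if step is None:
        step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n           # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10); w_true[:3] = [1.5, -2.0, 1.0]
y = X @ w_true + 0.01 * rng.normal(size=100)
w = proximal_gradient(X, y)
print(np.nonzero(np.abs(w) > 0.1)[0])  # indices of the recovered support
```

The "explicit model identification" the paper accelerates is exactly this recovery of the active support, after which optimization can continue on the reduced set of coordinates.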
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
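The implicit-layer idea behind DEQ models can be sketched as a fixed-point solve: instead of unrolling finitely many recurrent updates, solve directly for the equilibrium of one update map. The weights below are arbitrary illustrative values, scaled so the map is contractive and the iteration provably converges.

```python
import numpy as np

def deq_layer(x, W, b, tol=1e-8, max_iter=200):
    """Solve z* = tanh(W z* + x + b) by fixed-point iteration -- the implicit
    'infinite-depth' layer at the heart of deep equilibrium models."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + x + b)
        if np.linalg.norm(z_next - z) < tol:   # equilibrium reached
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))
W *= 0.5 / np.linalg.norm(W, 2)   # spectral norm 0.5 -> contraction
b = rng.normal(size=d)
x = rng.normal(size=d)
z_star = deq_layer(x, W, b)
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + x + b))
print(residual)  # ~0: z_star satisfies the equilibrium equation
```

In practice DEQ flow estimators use faster root-finders than plain iteration and backpropagate through the fixed point via the implicit function theorem, which is what removes the memory cost of unrolling.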
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- Learned Turbulence Modelling with Differentiable Fluid Solvers [23.535052848123932]
We train turbulence models based on convolutional neural networks.
These models improve under-resolved low resolution solutions to the incompressible Navier-Stokes equations at simulation time.
arXiv Detail & Related papers (2022-02-14T19:03:01Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.