An Efficient Machine-Learning Approach for PDF Tabulation in Turbulent
Combustion Closure
- URL: http://arxiv.org/abs/2005.09747v1
- Date: Mon, 18 May 2020 00:13:55 GMT
- Title: An Efficient Machine-Learning Approach for PDF Tabulation in Turbulent
Combustion Closure
- Authors: Rishikesh Ranade, Genong Li, Shaoping Li, Tarek Echekki
- Abstract summary: We introduce an adaptive training algorithm that relies on multi-layer perceptron (MLP) neural networks for regression and self-organizing maps (SOMs) to cluster the data so that each cluster is tabulated by a different network.
The algorithm is validated for the so-called DLR-A turbulent jet diffusion flame using both RANS and LES simulations.
- Score: 0.3277163122167433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probability density function (PDF) based turbulent combustion modelling is
limited by the need to store multi-dimensional PDF tables that can take up
large amounts of memory. A significant saving in storage can be achieved by
using various machine-learning techniques that represent the thermo-chemical
quantities of a PDF table using mathematical functions. These functions can be
computationally more expensive than the existing interpolation methods used for
thermo-chemical quantities. More importantly, the training time can amount to a
considerable portion of the simulation time. In this work, we address these
issues by introducing an adaptive training algorithm that relies on multi-layer
perceptron (MLP) neural networks for regression and self-organizing maps (SOMs)
to cluster the data so that each cluster is tabulated by a different network.
The algorithm is designed to address both the multi-dimensionality of the PDF
table and the computational efficiency of tabulation. SOM clustering divides
the PDF table into several parts based on similarities in data. Each cluster of
data is trained using an MLP algorithm on simple network architectures to
generate local functions for thermo-chemical quantities. The algorithm is
validated for the so-called DLR-A turbulent jet diffusion flame using both RANS
and LES simulations and the results of the PDF tabulation are compared to the
standard linear interpolation method. The comparison yields a very good
agreement between the two tabulation techniques and establishes the MLP-SOM
approach as a viable method for PDF tabulation.
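For concreteness, below is a minimal sketch of the SOM-then-MLP pipeline the abstract describes, assuming the minisom and scikit-learn packages; the table coordinates, SOM grid size, and network architecture are illustrative placeholders, not the settings used in the paper.

```python
# Minimal MLP-SOM tabulation sketch (illustrative; not the authors' code).
# Placeholder data stands in for a PDF table: `inputs` are table coordinates
# (e.g. mean mixture fraction, its variance, scalar dissipation rate) and
# `outputs` are the thermo-chemical quantities to tabulate.
import numpy as np
from minisom import MiniSom
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
inputs = rng.random((5000, 3))
outputs = np.sin(inputs @ rng.random((3, 2)))        # placeholder quantities

# 1) Cluster the table entries with a self-organizing map.
GRID = 4
som = MiniSom(GRID, GRID, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(inputs, 5000)
cells = np.array([som.winner(x) for x in inputs])    # winning (row, col) per entry
cluster_ids = cells[:, 0] * GRID + cells[:, 1]

# 2) Train one small MLP per cluster, yielding local functions.
models = {}
for cid in np.unique(cluster_ids):
    mask = cluster_ids == cid
    models[cid] = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000)
    models[cid].fit(inputs[mask], outputs[mask])

# 3) Retrieval: route a query to its SOM cluster, then evaluate the local MLP.
def lookup(x):
    i, j = som.winner(x)
    return models[i * GRID + j].predict(x.reshape(1, -1))[0]

print(lookup(inputs[0]))
```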
Related papers
- Online Parallel Multi-Task Relationship Learning via Alternating Direction Method of Multipliers [37.859185005986056]
Online multi-task learning (OMTL) enhances streaming data processing by leveraging the inherent relations among multiple tasks.
This study proposes a novel OMTL framework based on the alternating direction method of multipliers (ADMM), an optimization method well suited to distributed computing environments.
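As background for this entry and the Bi-cADMM paper below, a minimal sketch of the ADMM splitting pattern on a lasso problem; the formulation and parameters are generic illustrations, not either paper's updates.

```python
# Generic ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z.
# Shows the characteristic split: a smooth x-update, a proximal z-update,
# and a dual update, each simple on its own.
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)     # formed once, reused each step
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))             # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u += x - z                                                     # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
w = np.zeros(10); w[:3] = [2.0, -1.5, 1.0]
b = A @ w + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b), 2))         # recovers the sparse coefficients
```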
arXiv Detail & Related papers (2024-11-09T10:20:13Z)
- A GPU-Accelerated Bi-linear ADMM Algorithm for Distributed Sparse Machine Learning [4.258375398293221]
Bi-cADMM is aimed at solving large-scale regularized sparse machine learning problems defined over a network of computational nodes.
Bi-cADMM is implemented within an open-source Python package called Parallel Sparse Fitting Toolbox.
arXiv Detail & Related papers (2024-05-25T15:11:34Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
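A minimal numpy sketch of the mechanism named here, reading learnable memory tokens through dot-product attention; the paper's heterogeneous memory design and training procedure are not reproduced.

```python
# Attention over learnable memory tokens (mechanism sketch only).
import numpy as np

d, k = 64, 16                                # feature dim, number of tokens
rng = np.random.default_rng(0)
memory = rng.standard_normal((k, d))         # learnable tokens (random here)

def read_memory(x):
    """Attend from a query feature x of shape (d,) to the memory tokens."""
    scores = memory @ x / np.sqrt(d)         # (k,) dot-product attention logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over tokens
    return weights @ memory                  # (d,) memory readout

x = rng.standard_normal(d)
augmented = x + read_memory(x)               # e.g. residual fusion with memory
print(augmented.shape)
```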
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- A Deep Learning algorithm to accelerate Algebraic Multigrid methods in Finite Element solvers of 3D elliptic PDEs [0.0]
We introduce a novel deep learning algorithm that minimizes the computational cost of the algebraic multigrid (AMG) method when used within a finite element solver.
We show experimentally that the pooling reduces the computational cost of processing a large sparse matrix while preserving the features needed for the regression task at hand.
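One plausible form of such pooling, sketched with scipy: coarsen a large sparse matrix into a fixed-size dense grid that a small regressor can consume. Block-wise max of entry magnitudes is an assumption here, not necessarily the paper's operator.

```python
# Pool an (n, n) sparse matrix into a (grid, grid) dense feature map.
import numpy as np
import scipy.sparse as sp

def pool_sparse(A, grid=32):
    A = A.tocoo()
    n = A.shape[0]
    out = np.zeros((grid, grid))
    rows = A.row * grid // n                 # map each nonzero to a block
    cols = A.col * grid // n
    np.maximum.at(out, (rows, cols), np.abs(A.data))   # block-wise max pooling
    return out

A = sp.random(1000, 1000, density=0.005, random_state=0, format="coo")
print(pool_sparse(A).shape)                  # (32, 32): fixed-size regressor input
```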
arXiv Detail & Related papers (2023-04-21T09:18:56Z)
- Multivariate Probabilistic CRPS Learning with an Application to Day-Ahead Electricity Prices [0.0]
This paper presents a new method for combining (or aggregating or ensembling) multivariate probabilistic forecasts.
It considers dependencies between quantiles and marginals through a smoothing procedure that allows for online learning.
A fast C++ implementation of the proposed algorithm is provided in the open-source R-Package profoc on CRAN.
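A Python sketch of the core idea, combining two quantile forecasters with per-quantile weights updated online via exponentiated gradient on the pinball loss (whose average over levels approximates the CRPS); this is not the profoc API, and the paper's smoothing across quantiles and marginals is omitted.

```python
# Online per-quantile combination of two forecasters (illustrative).
import numpy as np

taus = np.array([0.1, 0.5, 0.9])             # target quantile levels
weights = np.full((len(taus), 2), 0.5)       # one weight pair per level
eta = 1.0                                    # learning rate

def pinball(y, q, tau):
    return np.maximum(tau * (y - q), (tau - 1.0) * (y - q))

rng = np.random.default_rng(0)
for _ in range(500):
    y = rng.normal()                                      # new observation
    preds = np.column_stack([
        np.full(3, -0.2),                                 # expert A: constant
        np.quantile(rng.normal(size=256), taus),          # expert B: sampled
    ])
    losses = pinball(y, preds, taus[:, None])             # (levels, experts)
    weights *= np.exp(-eta * losses)                      # exponentiated gradient
    weights /= weights.sum(axis=1, keepdims=True)         # renormalize per level

print(np.round(weights, 2))      # expert B should dominate at the extreme levels
```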
arXiv Detail & Related papers (2023-03-17T14:47:55Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training neural solvers in an unsupervised manner.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
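The probabilistic representation is, for the heat equation, the Feynman-Kac identity u(x, t) = E[u0(x + sqrt(2*nu*t) * Z)] with Z ~ N(0, 1); below is a sketch of the resulting particle estimator (the neural-solver training loop built on top of it is not shown).

```python
# Monte Carlo estimate of the heat equation u_t = nu * u_xx via Feynman-Kac.
import numpy as np

def heat_mc(u0, x, t, nu, n_particles=100_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_particles)           # ensemble of random walkers
    return u0(x + np.sqrt(2.0 * nu * t) * z).mean()

x, t, nu = 1.0, 0.5, 0.1
estimate = heat_mc(np.sin, x, t, nu)               # initial data u0(x) = sin(x)
exact = np.exp(-nu * t) * np.sin(x)                # analytic solution for sin
print(estimate, exact)                             # the two should nearly agree
```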
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
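A minimal sketch of the projected stochastic gradient ascent primitive named in the title, with an L2-ball constraint standing in for a resource budget; the federated meta-learning wrapper is not modeled.

```python
# Projected stochastic gradient ascent: noisy ascent step, then projection.
import numpy as np

def project_l2_ball(x, radius=1.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
c = np.array([2.0, 0.0, 0.0, 0.0, 0.0])      # maximizer of f(x) = -0.5*||x-c||^2
x = np.zeros(5)
for _ in range(200):
    grad = c - x + 0.01 * rng.standard_normal(5)     # stochastic gradient of f
    x = project_l2_ball(x + 0.1 * grad)              # ascend, then project

print(np.round(x, 2))       # approx [1, 0, 0, 0, 0]: c projected onto the ball
```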
arXiv Detail & Related papers (2021-05-31T08:15:44Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
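A sketch of the idea on a least-squares toy problem: softmax weights over per-sample mini-batch losses scale each sample's gradient inside a momentum SGD update. The temperature and exact weighting here are illustrative, not necessarily ABSGD's scheme.

```python
# Momentum SGD with attentional per-sample weights (illustrative).
import numpy as np

rng = np.random.default_rng(0)
w, velocity = np.zeros(2), np.zeros(2)
lam, lr, beta = 2.0, 0.01, 0.9               # temperature, step size, momentum

for _ in range(600):
    X = rng.standard_normal((32, 2))                   # mini-batch
    y = X @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(32)
    residual = X @ w - y
    losses = 0.5 * residual ** 2                       # per-sample losses
    p = np.exp(losses / lam)
    p /= p.sum()                                       # attentional weights
    grad = X.T @ (p * residual)                        # loss-weighted gradient
    velocity = beta * velocity + grad                  # momentum buffer
    w -= lr * velocity

print(np.round(w, 2))                  # close to the true coefficients [1, -2]
```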
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
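A minimal consensus-ADMM sketch with quadratic local losses: each node solves a small closed-form problem on its own data and only compact states are exchanged. The coding and security mechanisms of the paper are not modeled, and the global average would in practice be computed without a fusion center (e.g. by gossip).

```python
# Consensus ADMM: min sum_i 0.5*||X_i w - y_i||^2  s.t. all local w_i agree.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d, rho = 4, 3, 1.0
w_true = np.array([1.0, -1.0, 0.5])
nodes = []
for _ in range(n_nodes):
    X = rng.standard_normal((40, d))
    nodes.append((X, X @ w_true + 0.05 * rng.standard_normal(40)))

x = np.zeros((n_nodes, d))       # local models, one per edge node
u = np.zeros((n_nodes, d))       # scaled dual variables, kept at the nodes
z = np.zeros(d)                  # global consensus variable

for _ in range(50):
    for i, (X, y) in enumerate(nodes):                    # node-side updates
        x[i] = np.linalg.solve(X.T @ X + rho * np.eye(d),
                               X.T @ y + rho * (z - u[i]))
    z = (x + u).mean(axis=0)                              # consensus averaging
    u += x - z                                            # dual updates

print(np.round(z, 2))            # consensus model, close to w_true
```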
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems with a deep neural network as the predictive model.
Our method provably requires far fewer communication rounds than naive parallelization.
Experiments on several benchmark datasets confirm the theory and demonstrate the method's effectiveness.
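For intuition, a sketch of AUC maximization through a pairwise squared-hinge surrogate on a linear scorer: positive scores are pushed above negative scores by a margin. The paper's distributed min-max reformulation for deep networks is not reproduced.

```python
# Pairwise squared-hinge surrogate for AUC on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
Xp = rng.normal(+1.0, 1.0, (100, 2))         # positive-class features
Xn = rng.normal(-1.0, 1.0, (100, 2))         # negative-class features
w, lr = np.zeros(2), 0.5

for _ in range(200):
    sp, sn = Xp @ w, Xn @ w
    slack = np.maximum(0.0, 1.0 - (sp[:, None] - sn[None, :]))  # pairwise hinge
    g = -2.0 * slack                                 # gradient w.r.t. score gap
    grad = (Xp.T @ g.sum(axis=1) - Xn.T @ g.sum(axis=0)) / g.size
    w -= lr * grad

sp, sn = Xp @ w, Xn @ w
print(round((sp[:, None] > sn[None, :]).mean(), 3))  # empirical AUC, near 1.0
```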
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.